Saturday, August 28, 2010

Practical Lessons of Reflecting on the Competing Usage of Religion

A recurring question during my philosophy of religion class on Friday had to do with what the practical lessons would be if I’m right (or on the right track) about “religion” being a bifurcated essentially contested concept—that is, a concept which is not only used in competing ways within a community of discourse (based on disagreement over which features of the religion paradigms justify the appraisive connotations of the term) but has come to be used in incommensurate ways by competing communities of discourse (insofar as each community attaches an opposing appraisive connotation to the term).


One of the points I’ve been stressing in relation to this analysis is that, while ordinary essential contestability can be valuable insofar as it prevents legitimate voices in moral dispute from being silenced through definitional fiat, the bifurcation of an essentially contested concept is not similarly valuable—but it may be a reality we have to come to grips with.

But how? Put another way, what do we do in the face of the fact that “religion” has come to be used in such conflicted ways?

After class, in conversation with a few students, I enumerated two lessons, but there are surely more. Here are the two I identified:

1) It is helpful, whether one is critiquing or defending “religion,” to specify what sense one has in mind and to be clear that what one has to say may not apply to religion in other senses.

2) It is not helpful, when someone else is critiquing or defending religion in some sense that they specify, to say, “But that’s not religion, so your defense/critique is irrelevant.”

(By the way, I suspect I may have been at least occasionally guilty of the latter—although sometimes what I intend to say is, “That’s not the only kind of religion, and so treating your critique as a condemnation of religion in every plausible sense is a mistake”).

But these are hardly the only lessons that might be drawn. So let me throw the question out to readers of this blog: If “religion” is a “bifurcated essentially contested concept,” what steps do we need to take to facilitate productive dialogue about the phenomena that—in competing and conflicted ways—fall within the scope of the term's divergent use?

Thursday, August 26, 2010

The Concept of "Religion"

In my philosophy of religion class yesterday I gave everyone in the class a chance to give their own concise answer to the following question: “What is religion?” (To be more precise, I asked them to imagine they were being interrogated by space aliens, and that the fate of the Earth depended on their answer).


Not surprisingly, there were many diverse responses—some emphasizing social and institutional phenomena, some emphasizing beliefs or ways of looking at the world, some emphasizing practices or ways of life, and some stressing inner spiritual experience. Some definitions were, I’d say, quite gilded—that is, they used language aimed at highlighting the beauty or value of the thing being defined. Other definitions were quite the opposite. For example, one student defined religion as a system for justifying the exclusion or marginalization of people from a community.

Once I had the chalkboard covered with these various accounts, I pointed out how this diversity is also represented among scholars—with understandings of religion ranging from the more private, personal, “feeling”-oriented understanding (favored by theologian Friedrich Schleiermacher and philosopher/psychologist William James), to more sociological understandings (promulgated by, for example, Emile Durkheim).

I then spent a few minutes considering the idea I advanced in my book—namely, that “religion” is what Wittgenstein calls a “family resemblance” term (see p. 15 of Is God a Delusion? for an account of this idea). Then, in the last few minutes of the class, I turned to another approach—one that, based on some further reflection I’ve done since writing my book, I’m becoming increasingly convinced is the right one. According to this approach, “religion” is best understood as what philosopher W. B. Gallie called an “essentially contested concept”—but with a twist.

Since I didn’t have time to fully explain this idea in class, I want to do so in this post. In fact, I’ve already done so on this blog—here and here. But since it’s always helpful to try to explain ideas in different ways, let me have another go at it here.

What Gallie noticed was that there are some terms whose proper use, rather than being determined by an established definition (one that sets out the necessary and sufficient conditions for something to fall within the term’s scope), is instead determined by a shared set of complex exemplars or paradigms along with a shared appraisive meaning. So for example, there isn’t a common definition of “rape.” Instead, there are a bunch of exemplars—sexual acts that we all agree count as rape—together with general agreement that when an act is labeled “rape” there’s a strongly negative appraisal that goes along with that.

Here’s the thing about “rape.” It just isn’t and never will be a neutral, purely descriptive term. To call something rape is (among other things) to condemn it in a particular way. That condemnation is part of the meaning of the term. And so it matters a lot whether or not a particular act qualifies as rape. Acts of rape are morally worse than other classes of sexual acts (such as seduction, say, or aggressive lovemaking, or adultery).

The paradigms of “rape” exist because there are a bunch of things that we all agree deserve to be condemned in this distinctive way. But these paradigms are complex. They have lots of different features. And we don't all agree on what it is about these paradigms that makes them deserving of the negative appraisal. And this means that there are controversial cases.

Consider: A guy keeps pressuring his high school girlfriend to have sex. She doesn’t want to. He threatens to break up with her. She closes in on herself. He backs off for a few minutes, then begins groping her again. She doesn’t resist. He undresses her. She remains totally passive and unresponsive. He puts on a condom and penetrates her.

Is it rape? More people would be inclined to say “yes” today than twenty years ago—but there are still many who’d say it isn’t, that the guy is being insensitive but isn’t a rapist.

The reason for the dispute is that there isn’t agreement about whether the boy’s behavior in this case deserves the negative implications of the “rape” label. In other words, this is a moral dispute about what warrants a certain kind of negative appraisal.

And moral disputes can’t be resolved through definitional fiat. Suppose someone says, “From now on, rape will mean an act in which someone uses physical force to overcome a woman who is actively resisting sexual penetration. As such, the case at hand isn’t rape.” Such a move isn’t going to just be accepted. Why? Because to call something “rape” is to say that there's a certain kind of “badness” to it—more precisely, the same kind of badness that the agreed paradigms of rape possess. And so, to define rape as “an act in which someone uses physical force to overcome a woman who is actively resisting sexual penetration” is to say, in effect, that only acts which meet these conditions are bad in the relevant way. Put another way, to define “rape” is to take a stand in a moral dispute.

And as long as there is moral dispute, to impose a uniform definition of “rape” on a community of speakers is to impose one disputed answer to a moral question on everyone in the community. This wouldn’t be merely an act of establishing a linguistic convention. It would be an act of using language to truncate debate and to effectively delegitimize certain moral views.

And this is why some concepts become essentially contested. Their being essentially contested is a good thing—a way to keep some voices in a moral debate from being illegitimately silenced through definitional fiat.

My claim is that this idea of essential contestability is useful for understanding religion—but not if we accept Gallie’s idea without modification. Religion, I think, is an essentially contested concept with a twist. And what’s the twist? Here’s how I explain it in a forthcoming article (“Moving the Goal Posts?” to be published in Philo: A Journal of Philosophy):

But unlike “art,” whose appraisive meaning is positive, or “terrorism,” whose appraisive meaning is negative, “religion” has come to be used such that there are two competing communities of discourse, each using the term in an essentially contested way. But whereas one community of discourse treats “religion” as a positive appraisive concept and seeks to gauge which features of the paradigms warrant the positive appraisal, the other treats it as a negative one and seeks to judge which features warrant the negative appraisal. When a concept comes to be used in this way, we might call it a “bifurcated essentially contested concept.”
Unlike essentially contested concepts as Gallie understood them, I’m not at all convinced that bifurcated essentially contested concepts serve a useful function. When an essentially contested concept becomes “bifurcated,” what happens? On the one hand, you have those who attach a positive appraisive meaning to the paradigms of religion. They will be formulating their definition of religion by looking for what it is about the paradigms of religion that justifies the positive appraisal (and so will sift out of their understanding of religion anything in the paradigms that warrants a negative appraisal). On the other hand, those who attach a negative appraisive meaning to “religion” will be doing the opposite. The result may be that you have two parties with virtually identical value systems, who therefore make the same appraisive judgments about the various features of the religious paradigms—but who appear to be utterly at odds. An analogy—again from my forthcoming article—can be helpful:

It’s as if one community of discourse attaches to the term “sex” the appraisive meaning that typically attaches to “rape,” while another attaches to it the appraisive sense of “making love.” The former group looks at the range of phenomena that go by the label “sex” (ignoring, of course, those phenomena which no one would ever call rape) and tries to identify what justifies the negative appraisal. The latter does the same (ignoring the phenomena, such as rape paradigms, which no one would ever call “making love”), in the attempt to identify the parameters within which the positive appraisal is warranted. The latter holds up its results, saying, “This is the kind of sex (by which we mean making love) that deserves the label!” The former protests, “That’s not sex (by which we mean rape) at all!”
This, I think, is what’s going on in the conversation between Christopher Hitchens and Unitarian Universalist minister Marilyn Sewell, whose unusual debate inspired one of my recent Religion Dispatches articles. It may also help to explain some of the common charges leveled against my book—charges to the effect that I respond to the new atheists by coming up with this definition of religion that has nothing to do with real religion as it exists in the real world.

Of course, what I defend in my book has a great deal to do with actual religions—but when I look at those real-world phenomena, I’m trying to identify the features which might justify a positive appraisal (what I call the germ of a true religion that might be salvaged from the crud of “superstition” and “fundamentalism” and “religionism”). My critics, meanwhile, are sifting through the same phenomena in an attempt to identify what makes religion so bad. And what do they pinpoint? Precisely the crud from which, from my standpoint, true religion needs to be salvaged. And so they’re holding up the crud and calling it religion, while I’m holding up the gem that was buried in the crud. And they protest, “That’s not religion at all!”

Wednesday, August 25, 2010

Considering the Place of Intuitions in Philosophical Reasoning

A key issue that comes up repeatedly on this blog also strikes me as crucial for understanding the philosophical method—and so seems a fitting topic both for students of mine who are about to dig into my philosophy of religion course and for regular readers of this blog. The issue is this: When is it reasonable to trust our intuitions? Put another way, when is it appropriate to make use of an intuitive judgment as a premise in an argument, thereby treating it as a reason to believe a conclusion?


Of course, all of us agree that our intuitions can be mistaken. But does it follow that we are never warranted in making use of them? Is it even possible to refuse to make use of our intuitions—or is it, rather, the case that all of us inevitably appeal to intuitive judgments (but, perhaps, are mostly unaware that we are doing so, because the judgments seem so obvious to us that we don’t even notice we’re assuming them)?

I suspect the latter is true, especially when we are wrestling with philosophical questions—questions which, typically, cannot be answered based on sensory observation alone. In my own experience, everyone who engages in philosophical discussions and debates has intuitions that they are making use of—but not everyone recognizes their own intuitive presuppositions. In a sense, our intuitive starting points operate as the lenses through which we look at our world. Since we’re looking through them, they often become invisible to us. Part of what philosophers strive to do (with greater or lesser degrees of success) is to explicate what these starting points are. And in many cases the most valuable outcome of philosophical debate is that participants come away from them more fully aware of their own intuitive starting points than they were before—as well as more aware of how looking through those “lenses” colors their experience.

Of course, many of the premises that philosophers make use of in their arguments are drawn from observation. Sometimes they are observations of the most general kind (for example, the first of Aquinas’ “Five Ways”—his initial arguments for the existence of something with God-like properties—begins with the premise that there are things that undergo change). And philosophers will also make use of principles that are matters of logic (for example, the principle that something cannot be both the case and not the case at the same time in the same way, or the principle that if A and B are the identical thing, then everything that is true of A is also true of B).

But often enough, a premise in a philosophical argument will be neither of these things. Instead, it will be something that, while neither a matter of logic nor based on observation, just seems right (at least to the philosopher advancing the argument). In some cases the premise is thought to be self-evident. For example, Leibniz appeals at several points in his philosophical arguments to what he calls “The Principle of Sufficient Reason”—roughly, the principle that for everything that is the case, there is a reason why it, rather than something else, is the case. He treats this as a “first principle”—a self-evident starting point for reasoning about things.

And Leibniz isn’t alone. Richard Dawkins, in The God Delusion, offers an argument against the existence of God that depends on a principle Dawkins is only partly explicit about. The principle runs roughly as follows: “In order for an instance of organized complexity to be adequately explained by an intelligent designer, the intelligent designer—whether material or immaterial—must be at least as complex as that which is being explained.”

This principle isn’t a matter of logical necessity, and it certainly isn’t a matter of empirical observation (how many immaterial intelligent designers have we observed so as to ascertain that they consistently display the property of being at least as complex as what they have designed?). So why does Dawkins accept this principle? Because it just seems right to him. In other words, he has a strong intuition that it is true.

Often, philosophers rely on thought experiments whose most important function is to serve as “intuition pumps”—that is, their purpose is to help us get clear on what our intuitions are. In other words, the purpose of these thought experiments is to help us pinpoint what “just seems right” to us, to make these assumptions explicit—in part so that we can make deliberate use of them in our subsequent reasoning as opposed to relying on them implicitly without noticing that that’s what we’re doing; in part because only once we are conscious of our intuitive starting points will we make them available for critical scrutiny.

And this leads to my next point. If reliance on intuitions is inevitable—but our intuitions are fallible—we are faced with an important question: When should we trust our intuitions and make use of them, and when shouldn’t we? Put another way, when is an intuition a good reason for me to believe something, and when isn’t it?

At this point I think it’s important to distinguish between two kinds of intuitions. First, your mind may leap ahead of your plodding intellect to a conclusion that, in a sense, you believe “intuitively.” But in such cases, the intuition presents itself as a kind of research project: You have a sense (an “intuition”) that the body of evidence (or the rules of logic, or the basic doctrines of a belief system) supports this conclusion—but you still need to do the work of showing that it does. And once you take the time and effort to pursue that work, your intuition might be vindicated or undermined. In either case, you no longer believe it intuitively.

That’s not the kind of intuition I want to focus on here. Rather, I want to focus on the kind of intuition that serves as a foundation for thinking and critical reflection. I have in mind beliefs that just seem right to us in themselves, that we have a strong confidence in, but which we don’t believe on the basis of other things. The point is that we all have such intuitive starting points. But having them is no guarantee of their truth…and yet it would be impossible, I contend, to operate in the world without trusting these intuitions at least some of the time. So when do we, and when don’t we, trust them?

Now I don’t think I can, in a blog post, provide a fully satisfying answer to this question and then show that it’s the right one. But I do want to sketch out an answer that I find compelling (based on my intuitions?)—in part so that others can better understand my perspective, and in part to stimulate discussion.

So, when is an intuition of mine a “good reason”—that is, when is it appropriate for me to make use of that intuition in my reasoning, reaching conclusions based on it, making decisions in the light of those conclusions, etc.?

Let me begin the sketch of my answer by making two suggestions. First, I want to suggest that the worth of a reason can be specific to a particular reasoner in a particular context, such that what is a good reason for me to reach a certain conclusion, given my circumstances, may not be a good reason for you in your circumstances. This, I think, is going to be a characteristic feature of intuitions: that an intuition of mine is a good reason for me here and now does not imply that it must be a good reason for you—and so, if you don’t share this intuition, I am not warranted in regarding you as unreasonable.

In this respect, rock-bottom intuitions are different from, say, logical principles. If you deny the principle of noncontradiction, it may be entirely appropriate to call you irrational. If you consistently refuse to accept the clear implications of the most meticulous empirical observations consistently corroborated by the most highly trained researchers, I might be justified in calling you irrational. But if you don’t accept Dawkins’ intuitive principle about complexity or Leibniz’s Principle of Sufficient Reason, I’m not convinced that a judgment of irrationality is going to be appropriate. Put another way, there are some things about which reasonable people can disagree—and intuitions are among them.

My second suggestion is this: While an intuition can be a good reason for me even if it is something I could be wrong about, it isn’t always one. The fallibility of my basic intuitions imposes important constraints on when I can legitimately make use of them in my reasoning and when I cannot.

Because intuitions are fallible, I don’t think one should hold to them fanatically. One should, in other words, be open to evidence and arguments that might refute them (or that might shake one’s intuitive judgment enough that they no longer seem so intuitively right). But in the absence of such evidence or arguments, if I have a strong intuition that something is the case, then I may treat it as a premise in my thinking when the implications of doing so, should my intuition prove mistaken, are benign (or are no more pernicious than the implications of setting the intuition aside). However, when the implications of trusting my intuition are not benign (or are less benign than the implications of setting the intuition aside), I am not warranted in making use of the intuition as if it were a reliable premise.

To put this more succinctly, intuitions can face evidential “defeaters” (evidence that counts against the truth of the intuition) and pragmatic ones (practical circumstances which make it too risky to trust the intuition).

Let me focus a bit more on the latter. Whether acting on a mistaken intuition has benign or malignant implications may vary according to context—in one situation it may be entirely harmless to trust an intuition even should it prove to be wrong, while in another context the costs of trusting the very same intuition (should the intuition prove mistaken) are grave. But we also need to consider opportunity costs: what benefits are lost if one refrains from trusting an intuition and the intuition proves to be sound? In some cases, there are costs or benefits that emerge regardless of the intuition’s truth—that is, there may be benefits to trusting the intuition even if the intuition proves false (or costs to trusting it even if it should happen to be true).
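To make the structure of this pragmatic assessment concrete, here is a minimal decision-theoretic sketch (in Python). This is my own illustrative framing rather than anything argued for above, and all the numbers are invented placeholders; the point is only to display the comparison between trusting an intuition and setting it aside.

```python
# A toy rendering of "pragmatic defeat": compare the expected cost of
# trusting an intuition with the expected cost of setting it aside, across
# the case where the intuition is true and the case where it is false.

def expected_cost(p_true, cost_if_true, cost_if_false):
    return p_true * cost_if_true + (1 - p_true) * cost_if_false

# Suppose I am fairly confident in the intuition (p = 0.8), but acting on it
# would be grave if it proved mistaken, while setting it aside carries only
# a modest opportunity cost if it proved sound.
p = 0.8
trusting = expected_cost(p, cost_if_true=0.0, cost_if_false=100.0)
setting_aside = expected_cost(p, cost_if_true=5.0, cost_if_false=0.0)

print(trusting, setting_aside)  # 20.0 4.0: trusting is pragmatically defeated here
```

Notice that even a confidently held intuition can be pragmatically defeated if the asymmetry in costs is steep enough. That is just the point about context made above, restated in numbers.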

But hovering over all such pragmatic assessment of intuitions is the difficult fact that it relies on an evaluative framework of some kind. When you say that the costs of trusting an intuition should it prove to be mistaken are high, you are making a value judgment about the consequences of believing an intuition in error. How do we determine what is the right evaluative framework to use? Moral intuitions? You see the problem, I hope.

But instead of pursuing this problem here, let me explore more deeply how pragmatic assessment of intuitions might work by considering an example—a case in which mistakenly trusting one’s intuitions is (at least within my evaluative framework) not benign. Having recently finished reading Bernard Beckett’s short novel, Genesis, I find that a particular scenario from that book comes immediately to mind—and it seems a particularly fitting one because it potentially poses pragmatic challenges to some of my own intuitions, ones that we’ve been talking about in connection with my recent series of posts on materialist conceptions of mind.

(It’s also fitting because it might serve as free advertising for Bernard’s novel, which in addition to being thought-provoking on a philosophical level also earns the high praise of having kept me up significantly past my bedtime).

At the heart of the story is the relationship between two characters—one human, one android. The human, Adam Forde, has violated the laws of the Republic in which he lives in a way that makes him a focal figure in the Republic’s internal turmoil. Neither executing him nor letting him go is a safe option from the standpoint of the authorities, and so they pursue a compromise: he is locked away with an experimental android prototype, Art, for the purposes of exposing it to stimulation that will facilitate its cognitive development.

Let’s suppose Adam knows a fair bit about Art’s internal circuitry (I don’t think this is true of Adam in the novel, but let’s assume it). Suppose, furthermore, that based on this knowledge he has a strong intuitive sense that nothing in that circuitry could account for the presence of consciousness.

He might, of course, have the same intuition about the human brain: nothing about the physical system of the brain can, by itself, account for the existence of this thing called consciousness. But, like me, he'd also know that he is conscious, and that his conscious states are demonstrably correlated with brain states. Perhaps he reconciles these facts with his intuition about the inability of a physical system alone to explain consciousness by positing that there is something about the brain which attunes it to some non-physical reality. Although he has no idea what this non-physical reality is like, his strong intuitive sense that a physical system alone cannot account for the consciousness he's so intimately acquainted with leads him to believe there must be some mysterious additional component at work—but that there's also something about the brain’s unique properties that makes the connection with this non-material element possible. (We would have to suppose, furthermore, that he lacks a different intuition that some people do seem to have—namely, that if there is some essentially “spiritual” or non-physical reality, it couldn’t interact with a physical one, at least not in the way necessary to generate consciousness as we know it).

Of course, if all of this is true of Adam, then he might also think the same things about Art’s circuitry—that is, he might believe that any appropriately structured physical system could do the same work that the brain does, such that there is no reason in principle why an artificial intelligence could not be created. But let’s suppose he knows a fair bit about Art’s circuitry—and not only is it unlike the biological circuitry of the brain, but its design is of a sort that (based, perhaps, on a thought experiment similar to Searle’s “Chinese Room”) Adam has a strong intuitive sense cannot do the sort of consciousness-generating work a brain can do. And so, Adam’s intuitions lead to the conclusion that, at best, all that Art’s circuitry can do is mimic the behavior of a conscious being.

Now it may be that after prolonged interaction with Art, Adam accumulates a body of data that really challenges these underlying intuitions. Perhaps his interaction with Art has a flow to it that just doesn’t fit with the hypothesis that Art is merely mimicking consciousness. The nuances of their exchanges just seem too hard to “fake.” And so, eventually, he reaches a point at which his intuitions have been defeated by a body of experiential evidence. If so, it would no longer be reasonable for him to invoke his original intuitions as premises—he’ll be forced to conclude that one or another of them must be set aside, since taken together they imply a conclusion that he has strong evidential grounds for disbelieving. His intuitions have suffered evidential defeat.

(This, by the way, seems to be what actually happens to Adam in the story.)

But in the first days of his imprisonment with Art, his intuitions will not yet have been defeated in this way. At that point, it may be reasonable for him to trust them—but that depends on the pragmatic assessment of the associated costs. And those costs are a matter of context. My circumstances are quite unlike those that Adam faces. Among other things, I’m not sharing living quarters with an artificial intelligence that behaves as if it is conscious. And that difference might matter a great deal for whether Adam is warranted in trusting his intuitions.

In fact, in this case I think it would be unreasonable for him to accept his intuitions, even though they haven’t been evidentially defeated—because operating as if they are true (and hence as if Art is not a conscious being) is more costly, should the intuitions prove mistaken, than operating as if they are false.

Here’s my thinking. If Adam operates as if Art is not conscious and he’s mistaken, there is a real cost of considerable moral significance—as I understand it, the cost would be a failure to treat a conscious being with the dignity that such a being deserves. To treat a conscious being as if it were an unconscious one is to objectify it. In a sense, this is what distressed me about Adam’s early treatment of Art in the novel: in their first days together, Adam is happy to act on the assumption that Art is no more conscious than a toaster (although there is some evidence that, even at this early stage, Adam isn’t entirely confident in this assumption—he’s drawn into conversation with the android, but then, almost as if to rebuke himself for acting as if this were a conscious being, he strikes out against it in an act of physical violence).

One wonders whether this early treatment by Adam may have played an important role in Art’s subsequent development—if you’re conscious, the experience of being objectified, treated like a thing, creates both wounds and needs that can have long-term negative repercussions, ones that often propagate outwards onto others. (If you want to know more about Art’s development, read the book.)

On the other hand, if Adam operates as if Art is a conscious being and is mistaken, the costs don’t seem to be comparable. And so, in this case, there is a pragmatic reason for Adam to set aside his intuitions, even if they have not been evidentially defeated—because they have been pragmatically defeated instead.

It is, by the way, this sort of thinking that I believe underlies a comment that philosopher Peter Singer once made to me. (Peter Singer is a renowned moral philosopher who is most famous for having written Animal Liberation, a book that helped to launch the animal rights movement). Back in the ’90s Singer visited the university where I was teaching, and the philosophy department had a dinner for him at the home of one of our department members—a vegetarian dinner, of course. During the meal I asked him what he thought about eating snails.

Now, Singer’s argument for vegetarianism hinges on his case for saying that if a non-human animal has interests, its interests should weigh as heavily in our moral deliberations as the comparable interests of a human. Since pigs and cows and chickens all have interests—as evinced by their capacity to suffer—their interests need to be given the same moral weight as ours. My question to Singer was therefore really a question about whether he thought snails (and other animals with neurological systems far simpler than those of pigs and chickens, etc.) met this criterion of being interest-bearers, a condition that Singer (unlike certain environmental philosophers) doesn’t think can be met in the absence of consciousness—leading him to conclude that plants do not have interests.

Singer’s answer? “I don’t know if snails have interests, but I give them the benefit of the doubt.”

Monday, August 23, 2010

Pairing this Blog with My Philosophy of Religion Course: Reading List and Other Issues

As I’ve mentioned before, my plan for the coming semester (which begins today) is to deliberately pair my blog posts with current topics I’m covering in my philosophy of religion class. I will be inviting students enrolled in the class to read and comment on blog posts as a way of deepening their understanding of and critical engagement with course materials (whether any will take me up on this remains to be seen). I may also give students in the class the opportunity, if they wish, to submit “guest posts” for possible inclusion on the blog.


In general, I do not expect that the posts here will simply rehash course lectures or serve as a substitute for coming to class (I mention this especially for those of my students who might be reading this). Sometimes, I will use the blog to develop more fully arguments and ideas I only gesture towards in lectures. Sometimes, I will use it as a venue for explaining in somewhat different terms course materials that, from experience, I know to be frequently misunderstood. Sometimes I will use it mainly as a discussion forum for continuing conversations started in class (with the post aimed primarily at provoking such discussions). And sometimes I will use it to apply or relate course ideas to topics of popular interest.

For those who regularly read and comment on this blog, I ask you to continue doing so as usual—but I also ask that you take special care to be respectful to other commenters. This is not to say that you shouldn’t criticize or challenge ideas and arguments that might come from my students. Please do. Part of the point of inviting my students to participate in the blog is the hope that they will participate in critical conversations in which all participants (including me) are challenged to deepen their thinking in the light of substantive criticisms. What I ask, however, is that we challenge ideas and arguments without casting aspersions on those who advance them. In other words, avoid ad hominem attacks. During this semester, if I think that a posted comment is abusive, I will be more likely to delete it than I would be at other times (unless the target of the abuse is me, in which case I’ll let it stay up).

Although visitors to this blog should be able to continue to read my posts just as they always have, some of you have expressed an interest in following along with the readings (or at least the topics) we’ll be covering in the course. For those interested in doing so, I include below a schedule of course topics and readings. “TGD” stands for The God Delusion (by Richard Dawkins); “IGAD” stands for Is God a Delusion? (by me); and “GM” stands for the anthology God Matters: Readings in the Philosophy of Religion (by Raymond Martin and Christopher Bernard). Since God Matters is priced as a textbook, those of you not enrolled in the course may not want to invest in the book, and may want to use the list of readings below for the purpose of tracking down related readings on your own. The plan of the course is to use the contemporary “God debates” sparked by the spate of “new atheist” bestsellers as a springboard for a deeper look at topics and controversies in the philosophy of religion—hence the use of Dawkins’ book and mine, especially early in the semester.


Aug. 23 TOPIC: The New Landscape of Philosophy of Religion

READINGS: TGD, Preface; IGAD, Introduction

Aug. 25-27 TOPIC: Terms of the Discussion, Part I: What is “Religion”?

READINGS: TGD, Ch. 1; IGAD, Ch. 1; Reitan, “Christopher Hitchens, Religious in Spite of Himself?”

Aug. 30-Sept 1 TOPIC: Terms of the Discussion, Part II: The Concept of God

READINGS: TGD, Ch. 2; IGAD, Ch’s 2 & 3; GM 2 (Mavrodes, “Some Puzzles Concerning Omnipotence”); GM 5 (Boethius, “God is Outside of Time” from The Consolation of Philosophy)

Sept. 3 TOPIC: Are Sophisticated Religious Claims Meaningless?

READINGS: Antony Flew, “Theology and Falsification”

Sept. 8-10 TOPIC: The Evidentialist Challenge to Religion

READINGS: GM 23 (Clifford, “It is Wrong to Believe Without Evidence”) & 20 (Flew, “The Presumption of Atheism”); IGAD Ch. 4

Sept. 13-15 TOPIC: Aquinas’s Five Ways: A Closer Look

READINGS: GM 10 (Aquinas, “Five Ways”); TGD, Ch. 3, pp. 77-79; IGAD, Ch. 5, pp. 101-105

Sept. 17-22 TOPIC: Science and Arguments from Design: Does Science Offer Evidence for God?

READINGS: GM 15 (Paley, “The Watchmaker”), 16 (Hume, “Critique of the Design Argument”), & 17 (Collins, “God, Design, and Fine-Tuning”); IGAD, Ch. 5, pp. 106-114, handouts

Sept. 24 TOPIC: Dawkins’ Anti-Theistic Argument: Is God’s Existence Unlikely in the Light of Science?

READINGS: TGD, Ch. 4; IGAD, Ch. 5, pp. 114-119.

Sept. 27-29 TOPIC: Ontological Arguments

READINGS: GM 7 (Anselm & Gaunilo, “The Ontological Argument”), 8 (Kant, “Critique…”), & 9 (Malcolm, “Anselm’s Ontological Arguments”)

Oct. 1-4 TOPIC: The Leibniz/Clarke Cosmological Argument

READINGS: GM 11 (Yandell & Yandell, “The Cosmological Argument”) & 12 (Mackie, “Criticisms…”); IGAD Ch. 6

Oct. 6-11 TOPIC: The Nature and Authority of Religious Experience

READINGS: GM 41 (James, “Varieties of Religious Experience”), 43 (Alston, “Perceiving God”) & 44 (Scriven, “Critique…”); IGAD Ch. 7.

Oct. 15-18 TOPIC: Moral Arguments for God

READINGS: TGD Ch. 6; GM 18 (Lewis, “The Moral Argument…”) & 19 (Mackie, “Critique…”)

Oct. 20 TOPIC: Challenging Evidentialism I: Kierkegaard’s Fideism

READINGS: GM 27 (Kierkegaard, “Religious Belief Requires a Leap of Faith”)

Oct. 22-25 TOPIC: Challenging Evidentialism II: Reformed Epistemology

READINGS: GM 29 (Sennett, “Reformed Epistemology…”) & 30 (Parsons, “An Atheist Perspective”)

Oct. 27-Nov. 1 TOPIC: Challenging Evidentialism III: Pragmatic Faith

READINGS: GM 25 (Pascal, “The Wager”) & 26 (James, “The Will to Believe”); IGAD Ch. 8

Nov. 3-5 TOPIC: The Logical Argument from Evil and its Demise: The Free Will Defense

READINGS: GM 37 (Mackie, “The Logical Problem of Evil”) & 38 (Plantinga, “The Free Will Defense”)

Nov. 8 TOPIC: The Evidential Argument from Evil

READINGS: GM 39 (Martin, “The Evidential Argument from Evil”)

Nov. 10 TOPIC: Meeting the Challenge of the Evidential Argument: The Task of Theodicy

READINGS: GM 34 (Hick, “Soul-Making Theodicy”)

Nov. 12 TOPIC: Questioning the Epistemic Ground of the Evidential Argument

READINGS: “Stephen Wykstra’s Response to the Evidentialist Argument” (Handout)

Nov. 15 TOPIC: Changing the Question: The Existential Problem of Evil, or How to Live with Horror

READINGS: IGAD Ch. 9; “Marilyn McCord Adams on Evil” (A Philosophy Bites Podcast); possible handout

Nov. 17-22 TOPIC: The Problem of Hell: A Case Study in Philosophical Theology

READINGS: C.P. Ragland, “Hell” (Internet Encyclopedia of Philosophy); Thomas Talbott, “God, Freedom, and Human Destiny”; possible handouts.

Nov. 29-Dec. 1 TOPIC: Does Religion do More Harm than Good?

READINGS: TGD Ch’s 7 & 8; IGAD Ch. 10 (by 12/1)

Dec. 3-8 TOPIC: Religious Diversity and its Significance

READINGS: GM 57 (Hick, “Religious Pluralism”), 59 (Meeker, “Exclusivism, Pluralism, and Anarchy”), & 60 (Stairs, “Religious Diversity and Religious Belief”)

Thursday, August 19, 2010

Materialist Conceptions of Mind, Part III: Self-Monitoring Brains and Strange Loops

This will be my last post before the start of the new academic semester—at which point most (if not all) of my blog posts will be deliberately paired with topics I’ll be covering in my philosophy of religion class. But before that starts I want to conclude my series of posts on materialist conceptions of consciousness.


In this final post in the series I want to consider what I’m calling “perspectivalism”—roughly, the idea that consciousness is to be identified with the way that brain states “look” from a distinctive internal perspective. But on this definition, perspectivalism needn’t be a materialist conception of consciousness at all.

A non-materialist version of perspectivalism would hold that consciousness is the way that brain states look to a “subject” that isn't reducible to anything physical. On this view, while every conscious state is correlated with a physical state of the brain, and while changes in the brain will always bring about corresponding changes to consciousness (in other words, while this theory of consciousness fully aligns with everything that science tells us about the relationship between mental and neurological phenomena), consciousness does not arise without the introduction of a non-material subject. Such non-material perspectivalism, as defined, does not specify what this non-material subject is (and hence allows for numerous variants), but only what it is not: any component or part of the brain.

A materialist version of perspectivalism, by contrast, shares the idea that states of consciousness are the way that brain states “appear” from a distinctive internal perspective. But the “observer” in this case—that which provides the perspective on the brain states—is the brain itself. In other words, consciousness emerges when the brain begins to monitor its own activity—when brain states begin to represent brain states.

To unpack this idea as best I can, I want to recall a comment from my previous post—which at the time was little more than an aside. The comment is one I made in relation to my extended example of the two instantiations of the photo of my children—one a “hard copy” on my desk, the other an electronically produced image on my computer screen. The image, I said then, is an emergent property of two disparate physical substrates. But then I made the following remark:

Arguably, the emergent property in either case doesn’t really “emerge” in the absence of an observer who has the capacity to find in the similar organizational structures of the two physical substrates a shared meaning. In other words, at least in some cases, emergence requires a subject who is capable of meaning-attributions. Without that observer, we have an arrangement of inkblots or of illuminated pixels, but we simply don’t have an image of my children. That requires someone to find meaning in the pattern—and neither a piece of photo paper with ink on it nor a computer can do that.
The idea here is that the common feature of both physical systems doesn’t really come into existence apart from a meaning-bestowing observer. The idea I want to follow up on here is that consciousness is what certain neurological patterns “become” to the right kind of meaning-bestowing observer—and that the brain itself is capable of being such an observer. Put more simply, conscious states emerge from the physical system of the brain when that brain “monitors” its own processes (in the sense of tracking and modeling them). On this view, it should be clear that perspectivalism is really a distinctive species of emergentism—but one which adds an additional element that is intended, presumably, to help close the explanatory gap that a bare emergentism leaves us with.
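Since talk of the brain “monitoring” its own processes can sound mysterious, a toy structural sketch may help. The following Python fragment is purely illustrative (and certainly not a theory of consciousness); it only shows what it means, structurally, for some states of a system to have other states of that same system as their content.

```python
# A toy sketch of "states representing states": first-order states track
# features of the world; the monitor produces second-order states whose
# content is the system's own first-order activity.

class SelfMonitoringSystem:
    def __init__(self):
        self.first_order = {}   # states that represent features of the world
        self.second_order = {}  # states that represent first-order states

    def sense(self, feature, value):
        # A first-order state: a representation of something external.
        self.first_order[feature] = value

    def monitor(self):
        # Second-order states: the system tracking and modeling its own
        # first-order activity.
        for feature, value in self.first_order.items():
            self.second_order[feature] = ("representing", feature, value)

s = SelfMonitoringSystem()
s.sense("temperature", "warm")
s.monitor()
print(s.second_order["temperature"])  # ('representing', 'temperature', 'warm')
```

Whether any such self-directed representation could make the brain a meaning-bestowing observer is, of course, precisely what is at issue in what follows.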

What I want to do in this post is explain why such self-monitoring activity cannot solve the problems that I have articulated with respect to the previous species of materialism (identificationism and emergentism). In other words, the arguments I will be developing against perspectivalism will presuppose the conclusions I have reached on the basis of arguments in the previous two posts. As such, I want to offer an initial qualifying remark about what I do and don’t believe I am accomplishing in the current post.

Specifically, I am well aware that the arguments already laid out against identificationism and emergentism are not convincing to everyone—and I don’t think this post will add anything new to those arguments (and hence won’t do anything to convince those who are skeptical of them). My aim here is simply to show that if you find these arguments convincing, then the same basic concerns that lie behind your rejection of identificationism and emergentism will lead you to reject material perspectivalism as well. Material perspectivalism doesn’t add anything new that can rehabilitate materialism if the problems with identificationism and emergentism are granted. In short, what I hope to show in this post (even to those who don’t find in the earlier species of materialism the same problems I find) is that those who do find the earlier species of materialism problematic in the ways I highlighted won’t discover in perspectivalism a way around those problems.

With this in mind, consider the hypothesis that consciousness is what happens when the brain monitors its own activity, so that its brain states become the object of the brain’s own scrutiny. Just as the image on a computer screen acquires a distinctive emergent property by virtue of there being a meaning-bestowing observer to attach meaning to the pattern on the screen, perhaps consciousness is a property that emerges when the brain observes its own states.

There are two main objections to this view. First, in order for the brain to serve as an observer that bestows meaning on its own brain states, we must solve the riddle of how a physical system can generate semantic content. In other words, we must first close the explanatory gap in order for this self-monitoring process to do the job we need it to do. But the job we were hoping the self-monitoring process would do is close this very gap. And if the gap must first be closed in order for the self-monitoring system to close it, it follows that the self-monitoring system cannot be what closes the gap.

And so, if you think that the explanatory gap between neurological systems and conscious states is the kind of gap that can’t be bridged without the addition of some external element—that is, if you accept the core objection to emergentism—then the materialist version of perspectivalism won’t work. Perspectivalism, it seems, adds nothing to the explanatory picture that will be convincing to those who are skeptical of the emergentist hypothesis.

The second problem with perspectivalism connects up with the chief objection to identificationism. As a reminder, that objection holds, in brief, that a conscious state cannot be identified with its corresponding brain process because the conscious state (the “quale”) has relational properties that the brain process does not have. For example, I can be familiar with the conscious state (the way the wasp sting on my ankle feels) even though I am entirely unacquainted with the underlying brain state. (The argument is more complicated than this—but for a fuller treatment, see the first post in this series and the subsequent discussion).

At least at first glance, it seems that materialist perspectivalism can avoid this problem with identificationism. After all, for perspectivalism the quale isn’t identified with the underlying brain state at all. Instead, it is identified with the way the brain state appears to a brain that’s engaged in self-monitoring activity. And there is no difficulty with an appearance from a certain perspective having properties that the underlying cause of that appearance lacks (or vice versa). I can be immediately acquainted with the way that the northern lights look without knowing anything about the underlying physical reality (and vice versa).

But here’s the problem: on this perspectival view, it’s true enough that a quale is not identified with the corresponding brain event. But it is identified with a second-order brain event—one in which parts of the brain are undergoing brain processes in response to other brain processes.

But if this is right, the original problem just crops up at the next level. The reason why we couldn’t identify the quale with the first-order brain event was because we’re acquainted with the quale even when we are entirely unacquainted with the underlying brain event. But surely the very same thing can be said about the second-order brain event, generating the very same problem. One could try to perform the same move again, by identifying the quale with the way that the second-order "monitoring" brain event looks from a certain vantage point—but if that vantage point is the one provided by a third-order brain event, we’d just be moving the problem up one more level without making any progress. The only way to stop this endless buck-passing would be either to abandon perspectivalism at some point (in which case why not abandon it right away?), or to ground consciousness in the way that a brain state appears to a subject that isn't reducible to a brain state (in which case we’ve embraced non-materialist perspectivalism).

What all of this means is that if one accepts the problem with identificationism, then perspectivalism won’t get us any closer to an adequate materialist account of consciousness.

But before leaving perspectivalism altogether, I want to briefly consider Douglas Hofstadter’s “strange loop.” Here, I must confess that I haven’t had the time to study Hofstadter closely, so there may be something important I’m missing. If so, please pipe in with appropriate comments. In any event, as I understand it the basic thrust of Hofstadter’s idea is that the brain is designed to make representations of objects via various neurological mechanisms—but that one of the things it makes representations of is itself. But in representing itself, it includes in that representation the other representations—including its self-representation. The result is a kind of feedback loop out of which consciousness emerges—like the noise produced when a microphone is held up to the speaker to which it is connected, or the strange visual images that result when a video camera is focused narrowly on a monitor that is displaying a live feed of what the camera is filming.

Now I must confess to finding something really cool and wondrous about these kinds of feedback loops. As a child I often wondered what would happen if one put two mirrors up against one another and somehow managed to get enough light in there to make visual reflection possible—without introducing the lamp itself or any other object that might be reflected. I had fantasies that this would open up a window into some parallel dimension—if only one could get in there to see it (which, of course, couldn’t be done without introducing an object that would then shatter the magic).

The closest I got to achieving this fantasy was to enter the mirror room that’s on display at the Albright Knox Art Gallery in Buffalo, NY (where I grew up). The interior of the room is entirely mirrored—walls, floor, ceiling—so that when one stands in it one sees endless corridors in all directions, with oneself endlessly repeated. There’s also a table and chair made of mirrors, but I always found that element distracting. In any event, I remember thinking that if only I could make myself invisible (and get rid of the mirror table and chair) I’d be able to produce the conditions of infinite-reflection-of-nothing that would open a window to another world.

But as appealing as this infinite feedback loop idea is, I don’t see how it can generate anything really new. Without any light to reflect, two mirrors pressed against each other won’t produce any kind of mutual feedback. A speaker that produces no sounds of its own won’t cause feedback when the microphone is brought close to it unless there are ambient noises in the room to be magnified through the feedback. Likewise for the monitor-and-video loop. These loops need something to work with—and while they can produce a kind of infinite magnification of what is given to them, can something new really arise from them? I don’t see how.
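The point can even be made with a trivial simulation. Here is a minimal sketch of a feedback loop as repeated amplification; the gain and seed values are arbitrary, and the example claims nothing beyond the observation it illustrates: a loop magnifies what is fed into it, and creates nothing from nothing.

```python
# A feedback loop as repeated amplification: whatever comes out of the
# loop is fed back in and re-amplified on the next pass.

def run_feedback_loop(seed, gain=1.5, passes=10):
    signal = seed
    for _ in range(passes):
        signal = gain * signal  # each pass re-amplifies the previous output
    return signal

print(run_feedback_loop(seed=0.001))  # faint ambient noise, magnified: ~0.058
print(run_feedback_loop(seed=0.0))    # silence in, silence out, however many passes
```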

Now, if we accept Chalmers’ view that there are latent “consciousness” properties in matter, I can imagine how a feedback loop of the sort Hofstadter describes might “magnify” these properties—producing discernible consciousness in the brain out of indiscernible “traces.” But in the absence of this Chalmerian assumption, I don’t see how an infinite feedback loop can give us anything new. And so, unless the explanatory gap I talked about in the previous post can be closed in some other way, I don’t see how the mere introduction of a feedback loop can close it. Such a feedback loop might explain many things about the operation of the brain and identity-formation—but it seems that it can explain aspects of our conscious experience (such as our sense of self) only on the assumption that we already have conscious experience.

In short, as an account of the origins of consciousness, I don’t see how the feedback loop can be explanatorily significant. While I think that our brains do engage in self-monitoring activity—and while I think that this self-monitoring activity does generate feedback loops that are going to have interesting results that may explain various features or aspects of our conscious life—I don’t see how they can account for consciousness itself.

Sunday, August 15, 2010

Materialist Conceptions of Mind, Part II: Emergentism and the Explanatory Gap

In my last post I explained why I found a simple identification between mental phenomena and brain processes to be untenable. But such identification of mind and brain is not the only option for the materialist—and so even if my case against identificationism is sound, it doesn’t follow that one must give up on materialism. One alternative to simple identification is emergentism—the idea that consciousness is an emergent property of the brain.


This view holds (in the words of John Searle, the most prominent defender of emergentism) “that brain processes cause consciousness but that consciousness is itself a feature of the brain”—a feature that emerges because of something about the distinctive structure and organization and activity of the brain. It is this alternative I want to turn to now.

To get at the idea of what emergentism maintains, it may help to start with an example. On my desk, I have a framed picture of my kids—taken just over four years ago when my daughter was an infant. My son is leaning towards her, laughing while he looks at her. He’s wearing an armadillo t-shirt and is holding a green plastic cup. My daughter is wearing a red bodysuit and looks like she is boxing the air with her little fists.

This same picture is one I’ve uploaded onto my computer, and it is now part of my slideshow screensaver. When this picture appears on my computer screen, I can rightly say, “That is the same picture as the one in the frame on my desk.” And I can rightly say that because what I am referring to as the picture is something abstracted from (in one case) pixels and underlying hardware controlled by computer programming and (in the other case) ink distributed on photo paper. What I mean by “the picture” is the something that can emerge from each underlying physical substrate—something that both of these very different physical substrates have in common.

And what is it that they have in common? It has something to do with organization. In the case of the framed photo, dots of ink in different colors are arranged on paper in a pattern that produces a holistic visual effect on the viewer. The other image creates an arrangement of illuminated pixels rather than dots of ink, but the pattern into which those pixels are organized is the same. And so we have these two very different physical substrates that each succeed in generating identical images (that is, identical in kind; they are not numerically the same thing).

In each case, the picture of my children is an emergent property of the underlying physical substrate—it is a feature of the physical system that is produced by that system as a whole by virtue of what is true of the parts, but which is not a feature of any of the parts.

Arguably, the emergent property in either case doesn’t really “emerge” in the absence of an observer who has the capacity to find in the similar organizational structures of the two physical substrates a shared meaning. In other words, at least in some cases, emergence requires a subject who is capable of meaning-attributions. Without that observer, we have an arrangement of inkblots or of illuminated pixels, but we simply don’t have an image of my children. That requires someone to find meaning in the pattern—and neither a piece of photo paper with ink on it nor a computer can do that. This point, although not one I will pursue at the moment, may prove to be of great significance for thinking clearly about the emergence of consciousness.

Now, it is not necessary for emergence that there be multiple and distinct physical substrates that are somehow capable of giving rise to the same kind of property. I choose this example for two reasons. First, the fact that there are two very different physical substrates helps to isolate the emergent property and identify it as distinguishable from the substrate which produces it. Second and more importantly, this example is useful for highlighting some of the advantages of emergentism over identificationism with respect to consciousness.

The most obvious advantage, highlighted by this example, is that emergentism makes room for multiple realizations of consciousness. That is, different sorts of physical systems—human brains and, say, “positronic” ones—can both give rise to this thing we call consciousness.

Second, the example makes clear that an emergent property is not to be identified with the underlying physical system that causes it, even if it is causally explained or accounted for in each particular case by the more basic properties and structural features of that system. And because of this fact, one does not run into the sorts of problems that identificationism poses with respect to relational properties.

So, to go back to the picture of my children, I am intimately familiar with this image even though, in the case of its instantiation on my computer, I know very little about the underlying mechanisms which produce it. Since the image on the computer screen is caused by but distinct from the physical substrate that produces it, no problem arises from this difference in relational properties. Put simply, one can be perfectly familiar with one property of a thing without being at all familiar with its other properties. And so, if consciousness is an emergent property of brain processes, the fact that I am not familiar with any of the other properties of the underlying brain processes poses no difficulty at all. Likewise, the fact that scientific investigation of a bat's brain can't tell us what it's like to be a bat causes far less trouble here: a mode of inquiry that can describe one range of properties possessed by a thing might not be able to tell us everything there is to know about it. Some properties might simply be inaccessible to that mode of inquiry.

Now before turning to the challenges faced by emergentism, let me say a few words about what emergentism claims—and what it doesn’t claim—about consciousness. Here, I want to specifically stress that Searle, although the philosopher most commonly associated with emergentism, rejects some very important materialist theories of mind which are emergentist (at least according to the account of emergence offered above).

To be precise, Searle is a staunch opponent of the kind of functionalist account of mind that, for decades, was almost normative in cognitive science research. Functionalism, in its broadest terms, identifies mental states with functional ones—where a functional state is one that is possessed by a physical system when that physical system responds to a certain range of inputs with a predictable range of outputs. A vending machine has a functional state insofar as, when you insert the right amount of money and push the right button, a can of Coke will tumble out. It has a "functional organization" or "pattern of causal relations" (to borrow Searle's description).
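The notion of a functional state can be made concrete with a minimal sketch (the coin amount and button name below are invented placeholders):

```python
# A minimal sketch of a functional state: the system responds to a certain
# range of inputs with a predictable range of outputs. The coin amount and
# button name are invented placeholders.

def vending_machine(cents_inserted, button):
    """Map inputs (money, button press) to outputs in a reliable pattern."""
    if cents_inserted >= 150 and button == "coke":
        return "a can of Coke tumbles out"
    return "nothing happens"

assert vending_machine(150, "coke") == "a can of Coke tumbles out"
assert vending_machine(50, "coke") == "nothing happens"
```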

The most interesting functional states, from a cognitive science standpoint, are those that computers possess by virtue of their programming. A computer program is, basically, the imposition of a specific functional organization onto a computer's hardware. When a particular program is running (Microsoft Word, say), various inputs (keys punched on the keyboard) reliably produce certain outputs (letters appearing consecutively from left to right across the screen). Of course, different programmers can generate similar functional states in different ways, and can do so on different hardware. So the same functional state might be produced on a PC with Microsoft Word, or on a Mac with WordPerfect.
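Here is a toy illustration of this point (both implementations below are invented; any two programs with matching input-output behavior would serve):

```python
# Two invented "programs" realizing the same functional state: identical
# input-output behavior, very different internal organization.

def typewriter_v1(keys):
    """Build the output by concatenating one character at a time."""
    text = ""
    for key in keys:
        text = text + key
    return text

def typewriter_v2(keys):
    """Build the same output by joining a list buffer in one step."""
    buffer = list(keys)
    return "".join(buffer)

# Same inputs reliably produce the same outputs in both cases, so both
# systems possess the same functional state:
assert typewriter_v1("hello") == typewriter_v2("hello") == "hello"
```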

The most popular developed form of functionalism is the theory that mental states are akin to computer programs—that is, mental states just ARE the functional organization or software of the brain.

Searle calls this view "Strong AI," and he has attacked it again and again—most famously with his "Chinese Room" thought experiment. The thought experiment asks us to imagine someone who is isolated in a room and is handed Chinese characters from the outside. He then consults some rather complex instructions that tell him what to do when he receives such-and-such characters. Following these instructions, he puts together a set of characters and hands them out of the room. It turns out that what he is receiving are questions asked in Chinese, and what he returns are answers. The point is that no matter how sophisticated the instructions for what symbolic outputs to provide in response to which symbolic inputs, the man in the room cannot be said to understand Chinese—because the instructions (the "program") merely indicate how to correctly manipulate the symbols. They don't say what the symbols mean. Put another way, a program can offer syntax but not semantics. But consciousness has semantic content—in fact, that's what qualia are. And so, any account that can't explain such content can't explain consciousness.
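The structure of the thought experiment can be rendered as a bare lookup table. In this toy sketch (the entries are English placeholders, not real Chinese), the "program" pairs symbol-inputs with symbol-outputs and nothing more:

```python
# The Chinese Room as a bare lookup table. The entries are English
# placeholders standing in for Chinese characters.

RULE_BOOK = {
    "characters-for-'How are you?'": "characters-for-'I am well.'",
    "characters-for-'Is it raining?'": "characters-for-'No, it is sunny.'",
}

def man_in_the_room(characters_passed_in):
    """Follow the instructions; hand out the indicated characters."""
    return RULE_BOOK.get(characters_passed_in, "characters-for-'I do not know.'")

# From outside, the room appears to answer questions. But the rule book
# says only which symbols to return for which symbols received; it says
# nothing about what any symbol means. Syntax, not semantics.
print(man_in_the_room("characters-for-'How are you?'"))
```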

But here's the thing: the functional state of a system IS an emergent property of that system—it's a property that emerges out of how the whole is organized. What Searle's Chinese Room analogy demonstrates is that it isn't enough to say that consciousness is an emergent property of brain processes. We need to ask what kind of emergent property it is and how it emerges—and this account has to square with what we know first-hand about consciousness.

And although Searle is convinced that consciousness IS an emergent property, he has not offered any such account. That’s not his aim, because he doesn’t think he is in a position to do so. Rather, his aim is to spark a research program. He thinks cognitive scientists have been barking up the wrong tree—that their working model for understanding what consciousness IS just doesn’t work, and that as a result their attempts to explain consciousness are really explaining something else entirely (our ability to perform complex calculations, perhaps).

So, to summarize: the emergentist thinks that something about neurological systems—their constitutive elements, their organization, the interactions of the parts—gives rise to or produces on a holistic level this thing we call consciousness. But while one emergent property—the functional organization of the brain—can explain the brain’s capacity to respond to certain inputs (a hot stove scalding a hand) with appropriate outputs (the hand jerking quickly away), or its capacity to perform complex calculations, the functional organization alone is insufficient to account for the content of consciousness.

The problem, of course, is that neuroscientists do not at present have any account of how neurological systems can do this—a fact that most are willing to admit. Sandra Menssen and Thomas Sullivan, in The Agnostic Inquirer, offer some choice quotes from current neuroscience texts that are rather striking in this regard. For example, one of the standard textbooks in the field, Cognitive Neuroscience: The Biology of the Mind, puts it this way: “Right from the start we can say that science has little to say about sentience. We are clueless on how the brain creates sentience.”

Neuroscientists have had considerable success in tracking correlations between neurological events and conscious states—and then in describing the correlated neurological events in growing detail. They can do this, first of all, because their subjects can communicate their conscious states to researchers. Scientists can ask their subjects what they are feeling, sensing, etc., as those subjects’ brains are being probed using MRI technology or other exploratory equipment. To a lesser extent they can also track correlations because they can reasonably posit that their subjects are undergoing certain conscious states based on their own subjective experience of consciousness (they can assume that their research subject is having a certain kind of subjective experience because they’ve just flashed a bright light in the subject’s eyes and because the researchers know what their own subjective experience is when that happens to them).

But although they have been able to track correlations between brain states and conscious states in this way, we might well ask whether they could have made any progress at all in this project in the absence of either subjective reports from their subjects or conclusions based on attributing to their subjects what they find in their own consciousness (through introspection). The answer seems to be no. And the reason is that there is nothing about the MRI images or other data that by itself gives any clue as to what the corresponding contents of consciousness should be. There is a gulf between what neuroscientists are looking at and describing (the brain processes) and the correlated conscious states with which we are all familiar.
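The dependence on first-person reports can be put schematically. In this toy sketch (all of the data are invented placeholders), the correlation table exists only because each piece of brain data arrives already paired with a subject's report:

```python
# A schematic rendering of the methodology; all data here are invented
# placeholders. Each observation pairs brain data with a subject's report.

observations = [
    ({"region_V1_activity": 0.9}, "I see a bright light"),
    ({"region_S1_activity": 0.8}, "I feel a pinprick"),
]

# With the reports, a correlation table can be assembled:
correlations = {report: brain_data for brain_data, report in observations}

# Without the reports, all that remains is unlabeled data: numbers that by
# themselves give no clue as to the corresponding contents of consciousness.
unlabeled = [brain_data for brain_data, _ in observations]
```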

Could this explanatory gap be one that more scientific study will eventually close? Will we, eventually, be able to understand how neurological events can generate this thing we call consciousness? Many scientists express this hope, and many naturalists rest their ontology on it. They say, in effect, “Scientists have explained many mysteries that previously had been thought to be inexplicable in scientific terms. Just give them time, and they’ll explain consciousness, too.” Searle clearly has this hope—but he thinks the hope can be realized only once scientists aren’t being misdirected by philosophical “accounts” of consciousness that really deny the existence of the data to be explained.

But others think that there is a difference in kind between the mysteries that science has unraveled in the past and the present mystery of consciousness—a difference that makes this explanatory gap more than merely contingent. In effect, the view is this: the nature of neurological systems is such that a scientific understanding of them, no matter how complete, cannot account for consciousness.

The argument for this view traces back at least to Leibniz, who offers the following brief argument in The Monadology:

One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception. Thus it is in the simple substance, and not in the composite or in the machine, that one must look for perception.
In short, Leibniz thinks the physical organs that might be thought responsible for consciousness are mechanical systems—but there is nothing about mechanistic explanation that is in principle capable of accounting for our inner perceptual experience. We find a similar argument advanced in more detail by the great nineteenth-century German philosopher Hermann Lotze:

…out of all combinations of material conditions the origin of a spiritual condition of the soul never becomes analytically conceivable; or, more simply expressed, if we think of material elements in such a way as to predicate of them nothing which does not belong to the notion of matter, if we simply conceive of them as entities in space which are moveable and may call each other into motion by their power; if we, finally, imagine these motions of one or many elements as varied or combined as we please, there never comes a time when it is self-evident that the motions last produced may not longer remain motions but must be transformed into sensations. A materialism, therefore, which assumed that a spiritual life could spring out of simply physical conditions or motions of bodily atoms would be an empty assumption, and, in this form, has hardly ever been advocated in earnest.
Of course, Lotze was writing before this position was widely and persistently advocated in earnest by a range of thinkers in the 20th Century—but his argument has continued to crop up, most recently (in an implicit form) in Chalmers's zombie thought experiment—the point of which seems to be that there is nothing about an arrangement of physical bodies "pushing on each other," no matter how complex the system of pushes, that implies consciousness. It is for this reason, I think, that Chalmers is convinced we can always imagine such a system existing but lacking consciousness (a "zombie"). Since nothing about the physical system, if it possesses only physical properties, implies consciousness, it is possible for such a physical system to exist without consciousness.

Chalmers’ solution is one that Lotze was well aware of more than a century before Chalmers proposed it with much fanfare. In fact, here is what Lotze says immediately after his rejection of a simple mechanistic account of consciousness:

The materialistic views which have really had adherents have proceeded from the premise that what we call matter is really better than it externally appears. It contains in itself the fundamental peculiarity out of which the spiritual conditions may develop just as well as physical predicates—extension, impenetrability, etc.—are developed out of another fundamental peculiarity. From this results the new attempt, out of the reciprocal operations of these psychical elementary forces to elucidate all the elements of the spiritual life just as its bodily life is derived from the reciprocation of the physical elementary forces of its constituents.
Lotze goes on to challenge this “Chalmerian” view on the grounds that it cannot account for the unity of consciousness—but let me leave this aside for now. The point that Lotze wants to make—a point echoed by Chalmers more than a century later—is that there is nothing about purely mechanistic explanation that renders consciousness “analytically conceivable” in terms of it.

Menssen and Sullivan offer their own analogy for getting at this explanatory disconnect. Here is how they put the point:

Your child has a pull toy that features a little man in a box whose head pops in and out of the box as the toy is pulled along. You wonder, why does the head pop in and out? You examine the toy and see that the wheels are affixed to an axle with a rise in the middle; the little man sits on the rise, so his head goes up and down with each revolution of the wheels. Now your friend comes in and asks, ‘Why does the man’s head pop in and out?’ So you explain. And your friend says, ‘I understand all that, but why does the head pop in and out when the toy is pulled along?’ The question is bizarre: if your friend really understood everything you have said, it makes no sense to continue to ask why the head pops in and out.
This “making no sense to keep asking why once the explanation is understood” is what Lotze has in mind when he speaks of a phenomenon being “analytically conceivable” in relation to a particular kind of explanation—the explanation just shows us how the phenomenon in question is brought about. And this, Menssen and Sullivan maintain, is a feature of any genuine causal explanation. In their terms, “If a putative explanation of a phenomenon is a genuine causal explanation, then if you grasp the explanation in relation to the phenomenon, it cannot reasonably be asked: ‘But why does the phenomenon occur?’”

They follow their articulation of this principle with the following crucial claim: “No matter how much is said about the nervous system, as long as what is said is confined to statements of fundamental physics and chemistry, you will always be able to ask ‘But why does that produce consciousness?’”

The contention here is not only that current mechanistic explanations fall short of accounting for consciousness, but that "more of the same" sort of explanation won't close the gap—because the problem lies with the kind of explanation being offered, rather than with the amount of detail involved.

To see this point, consider an analogy my friend and colleague John Kronen likes to employ (one that dates him—and me, since I am able to appreciate it). Suppose someone comes upon Samantha Stephens wiggling her nose to miraculous effect. She wiggles, and the vacuum flies out of the closet and cleans the house all by itself. She wiggles, and her poor husband Darrin materializes in the living room, blinking in surprise. Suppose someone came along and said, "Oh, I see! No mystery here. These events are explained by the wiggling of her nose." Well, we wouldn't be satisfied.

Now suppose that the person took to studying Samantha's nose-wiggles and began to observe and record correlations between how she wiggles her nose and what happens. A long wiggle to the left followed by two short ones to the right precedes every instance of inanimate objects moving on their own; two short left wiggles followed by two short right wiggles precede every instance of teleportation, etc. Would we now be inclined to say, "Oh, now I get it!"? Of course not. And no matter how detailed the study of the patterns of nose movements—no matter how perfect the correspondence between distinctive sorts of nose wiggles and distinctive events—we would be no closer to having an explanation of how Samantha Stephens does what she does. Nose wiggles are analytically disconnected from flying objects and teleportation, such that they have no capacity to close the explanatory gap.

The claim, in effect, is that physical brain events bear the same relation to consciousness. They are analytically disconnected in such a way that it is not possible to close the explanatory gap.

Of course, it is one thing to say this, another for it to be true. But here is the problem. If someone were to ask why Samantha's nose-wiggles are analytically disconnected from flying objects so as to be incapable by themselves of providing an adequate explanation of the latter, I would be at a loss to offer anything other than, "Well, think about her nose wiggles. Think about flying objects. They have nothing to do with each other." The sense of disconnect here is so intuitively evident that, in the absence of some astonishingly unexpected explanation that succeeds in establishing a connection, one is justified in assuming that "more of the same" won't narrow the explanatory gap. We need to look past her nose and introduce some further element that can make the connection.

But, of course, defenders of materialist conceptions of consciousness think brains and minds have everything to do with each other—and so it may well be the case that what we have here is (once again) a basic dichotomy of intuitions. Those who find the explanatory gap argument persuasive have an intuitive and immediate sense of the distinctness of consciousness and mechanistic processes—and this intuitive sense entails that in the absence of a causal explanation that succeeds in closing the explanatory gap, the presumptive standpoint will be that the gap can’t be closed by that kind of explanation.

This is where I am positioned. And because I am positioned as I am, no materialist account of consciousness will be convincing in the absence of an explanation that actually closes the explanatory gap. But for those with different basic intuitions, the situation may be very different.

So what does all of this mean? Do I think that scientists should stop trying to explain consciousness in terms of the brain? No. But it does mean that unless and until they succeed, those like myself—those who see the disparity between brain processes and conscious states as being as enormous (more enormous, actually) as that between nose-wiggles and self-propelled vacuums—won't believe it until we see it. For us, given where we stand intuitively, the burden of proof rests on the materialist to show that the explanatory gap can be closed by nothing more than a deeper study of the brain.

In the meantime, we’ll conduct our own inquiries—looking for something more, some additional element, that can bridge the gulf between mechanistic explanations and the phenomenon of consciousness, and so explain the correlations that scientists have shown to exist between the two.

That different people, with different basic intuitions, are pursuing different theories and attempting to find ways to substantiate them (especially to those who stand on the other side of the intuitive gap) seems to me as if it can only be a good thing--although there are, of course, plenty of people on both sides of the divide who think that what those on the other side are attempting is absurd and pointless and worthy of nothing but mockery.

Thursday, August 12, 2010

Materialist Conceptions of Mind, Part 1: Identificationism

In the next few posts I want to (a) consider what I take to be the main ways that a materialist can account for consciousness, and (b) try to identify the main hurdles that these accounts must overcome in order to be generally convincing.


For my purposes here a “materialist” will refer to someone who holds that phenomena in general and mental phenomena in particular (that is, conscious states and subjective experiences such as thoughts, sensations, emotions, and desires) are (i) purely material or physical, and (ii) wholly and adequately explained by the sorts of physical mechanisms and processes that, in the sciences, are routinely posited as explanations for empirical phenomena (even if those explanations might not be accessible to us right now). By the “material” or “physical” I mean, at the most basic level, entities characterizable in spatio-temporal terms that interact and affect one another in accord with causal laws whose consistent operation can be tested through scientific means—in short, matter and energy.

So, (i) is an ontological thesis about what mental phenomena are. To say that they are purely physical is not, however, to say that they must be entities composed of matter and energy, like tomatoes and rocks. While that is one way in which something can be purely physical, there are other ways. Mental phenomena might, for example, be physical processes, or a certain class of properties possessed by those processes, or the way that certain physical processes "look" from a particular perspective.

The related thesis, (ii), is more of an epistemological one about what methods we should employ for coming to an adequate understanding of mental phenomena. The methods are scientific. That is, we will come to adequately understand mental phenomena in just the way that we have come to understand such phenomena as illness, lightning, or the cycle of the seasons: by uncovering the physical building blocks, structures, and mechanisms that (in accord with physical laws) explain the phenomena in question.

The materialist, thus understood, has (I think) three main options for dealing with consciousness. These options are:

1) "Identificationism," by which I mean a direct identification of mental phenomena with brain processes—what we call mental phenomena just are brain events in much the way that water is H2O. (Note: while some equate this view with what is called "eliminative materialism," I reserve "eliminative materialism" for the more radical view, seemingly embraced by Feyerabend and a few others, that mental phenomena don't really exist at all).

2) “Emergentism,” by which I mean the view (most famously advocated by Searle) that consciousness is an emergent property of an active brain. What we know as our consciousness is a feature of the brain in something like the way that brittleness is a feature of glass—it is a property of the whole that arises by virtue of the activity of the constituent elements organized in the distinctive way that they are, and can thus be adequately explained by reference to those elements.

3) “Perspectivalism,” by which I mean the view that consciousness is the way that the complex processes of the brain “look” from a distinctive perspective—specifically, from the “internal” perspective that arises when the brain itself engages in self-monitoring/self-representing activity. This is the kind of view advocated by Douglas Hofstadter in I Am a Strange Loop.

In this first post, I would like to look at the first of these three options to explain why materialists would do well to look elsewhere. The view gets what plausibility it has from the growing body of neuroscientific research that has succeeded in correlating specific “inner” experiences (the sensation of bright light, pain, etc.) with (partially understood/described) physical processes in the brain. The idea here is to simply identify the former with the latter—to say, in effect, that what neuroscientists are slowly describing just is the conscious experience with which it has been correlated.

There are, in effect, two substantial difficulties with making this kind of direct identification. The first has to do with the failure of brain states and mental states to meet ordinary standards of identification. The second has to do with the prospects for “multiple realization” of mental phenomena.

Let me begin with the ordinary standards of identification. In any usual understanding of identity, in order for A and B to be identical, what is true of A must be true of B and vice versa. If A has properties that B lacks (or vice versa), then the relation between A and B must be more complex than one of simple identity (perhaps A names X-as-experienced-from-perspective-1 while B names X-as-experienced-from-perspective-2).
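This standard of identity can itself be put schematically. Here is a crude sketch (the property sets are invented placeholders) of the principle that simple identity requires sameness of properties:

```python
# A crude sketch of the identity standard; the property sets are invented
# placeholders. If A and B are identical, whatever is true of A must be
# true of B, and vice versa.

properties_of_A = {"liquid", "clear", "quenches thirst"}
properties_of_B = {"liquid", "clear", "quenches thirst"}

def simple_identity_possible(props_a, props_b):
    """Simple identity requires exactly the same properties on each side."""
    return props_a == props_b

assert simple_identity_possible(properties_of_A, properties_of_B)

# If B lacks a property that A has, simple identity is blocked, and the
# relation between A and B must be something more complex:
properties_of_B.discard("quenches thirst")
assert not simple_identity_possible(properties_of_A, properties_of_B)
```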

But mental phenomena—thoughts, sensations, desires, emotions, etc.—clearly have properties that the corresponding brain processes lack, and vice versa. To see this better, it is helpful to introduce the notion of “qualia” (“quale” in the singular), by which is meant the subjective contents of experience, or, perhaps, the “way it feels” or “what it’s like” to undergo this or that mental experience (for example, what the peculiarly bitter sunflower seed I just ate tasted like as I chewed it up). This “what-it’s-like-ness” of mental phenomena isn’t some incidental feature of these phenomena, but rather their chief constitutive element. Put another way, what I mean by the conscious experience of pain just is the associated quale.

And, as Thomas Nagel and Frank Jackson have both famously argued in different ways, these qualia are inaccessible to scientific study in a way that the details of brain processes are not. Nagel, in “What Is It Like to Be a Bat?”, points out that we know a great deal about the neurophysiology of bats, and we know a great deal about how their acoustic radar works. But none of this tells us what it’s like to experience the world through such a radar in the way a bat does. The qualia of a bat’s inner life seem to be inaccessible to science in a way that the brain processes of the bat are not.

Jackson offers a different thought experiment, in terms of a hypothetical scientist who knows everything there is to know, scientifically, about the physiology of color perception, including an exhaustive understanding of the brain processes that correspond with the experience of color…but she has never herself experienced color. And then, one day, she sees a red rose for the first time. Jackson asks: has she learned something new? He thinks the answer is obviously “yes,” which means that there is something about the experience of color that is “above and beyond” the neurological processes science can study.

Here’s a way to put their point: brain processes are third-party accessible objects of study. But there is something related to these brain processes that is inaccessible to such third party study, being instead only available through first-person introspection. This something else is what has come to be called the “qualia” of consciousness. But the fact is that we might as well say that it is consciousness itself—because what we are referring to when we speak of our conscious life just ARE these qualia.

These facts make a simple identification between mental phenomena and brain states at best highly problematic. Mental phenomena are something that each of us has immediate first-person familiarity with; but these phenomena seem inaccessible to neuroscientists engaged in third-person investigation of our brains. And we can claim familiarity with what this or that conscious experience is like—and so honestly claim familiarity with mental states—even if we know absolutely nothing about the brain states with which they correspond. As such, it seems as if mental states have relational properties which brain states lack, and vice versa.

Some defenders of identificationism try to escape this conclusion by pointing to the fact that we can identify water with H2O even though, for a long time, people were entirely unfamiliar with the chemical composition of water (and many remain so). Doesn’t that show that someone’s being familiar/acquainted with A but not with B doesn’t preclude the identity of A and B?

The problem with this rejoinder is that it fails to make a crucial distinction. “Water” denotes a certain entity out there in the world—one we are acquainted with primarily through a range of properties such as liquidity, clarity, capacity to quench thirst, etc. “H2O” denotes the same entity out there in the world understood from a scientific perspective in terms of chemical composition. It turns out that the entity characterized by the former set of properties is the same one that has the given chemical composition.

That is, water and H2O are identical because they refer to the same thing. What is referenced by "water" is not a set of properties but a certain kind of thing (a "natural kind," some might say) that exists in the world but which we are acquainted with through the given set of properties. What we are referring to when we use "water" is the underlying thing which has the properties. But when we speak of pain, we are referring to the quale itself--the "what it feels like" of pain, as opposed to any underlying physical mechanism which might or might not be able to explain the emergence of this sensation.

To better see the problem here, let's consider a different example: the statement "Clark Kent is Superman." Now, consider two possible ways to look at this statement, related to two distinct ways of understanding “Clark Kent” and “Superman.” In the first case, the statement comes out true while in the second it comes out false (at least if we set aside for the moment the unique challenges associated with fictional individuals).

Consider Case 1. In this case, by “Superman” we mean “the individual who embodies a range of traits, including super strength, ability to fly, a do-gooder personality, etc.”; and by “Clark Kent” we mean “the individual who embodies a range of traits, including being mild-mannered, dorky, a reporter for the Daily Planet, etc.”. In this case the identification holds because the individual who possesses the former traits is the same individual who possesses the latter. It isn’t that Superman and Clark Kent possess different properties not possessed by the other. Rather, it is that we know the individual under one name in terms of some of that individual’s properties, and we know the individual under the other name in terms of a different set of that same individual’s properties. But given that the two names refer to the same individual, every property possessed by Clark Kent is also a property possessed by Superman, and vice versa. Hence, a simple identification is possible.

Now consider Case 2. In this case, by “Clark Kent” we mean “the identity taken on by Kryptonian Kal-El in which he masquerades as a normal Earthling”; and by “Superman” we mean “the identity taken on by Kal-El in which he exercises his unique powers in public view.” In other words, under this usage each term refers, not to an individual person, but to an “identity” or “persona” that an individual person has adopted. Well, given THOSE meanings of the respective terms, it would be a mistake to say that Clark Kent is Superman. After all, Kal-El’s “secret identity” is very different from his “superhero identity”, and so they cannot be the same identity—even if it is the same individual who adopts them both.
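The difference between the two cases can be captured in a toy sketch (the classes and descriptions are invented for illustration): in Case 1 the two names resolve to one and the same object, while in Case 2 they pick out two distinct objects.

```python
# A toy sketch of the two readings; the classes are invented for illustration.

class Person:
    """An individual who may adopt multiple personas."""
    pass

class Persona:
    """An identity adopted by some individual."""
    def __init__(self, description):
        self.description = description

# Case 1: both names pick out the same underlying individual.
kal_el = Person()
clark_kent = kal_el   # the name, as used here, refers to the individual
superman = kal_el     # and so does this one
assert clark_kent is superman   # identity holds: one referent, two names

# Case 2: each name picks out a distinct persona adopted by that individual.
clark_identity = Persona("masquerades as a normal Earthling")
superman_identity = Persona("exercises his powers in public view")
assert clark_identity is not superman_identity   # identity fails
```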

But given the way in which we use the language of mental states and the way that we use brain-state terms, the two are related in something more like Case 2 than Case 1. And this means that a simple identification of the kind “Clark Kent is Superman” is false, even if it might be true that there is a common underlying entity which is the bearer of both the “mental state” and “brain state” identities. This means that the materialist is better served by the materialist version of what has come to be called “property dualism”: the view that brain states have “mental properties” that are distinct from the physical properties studied by scientists. And while there are different ways to explicate this sort of property dualism, the one that strikes me as being the most plausible is emergentism—the idea (energetically defended by John Searle) that consciousness is an emergent property that supervenes on an active neurological system like the brain taken as a whole.

More quickly and briefly, let me stress one additional difficulty with identificationism. The problem lies in the fact that, by all appearances, on the assumption of materialism we should suppose consciousness admits of "multiple realizations." The idea of multiple realizations is this: the same property or process could be produced by very different physical building blocks and arrangements. If several engineering firms are given the same task—to design a device that dispenses Starbucks-quality lattes in response to the voice command, "Overpriced coffee! Now!"—they might realize their objective in very different ways. The materials might be different. How those materials are physically arranged might be different. The internal processes of the resultant machines might be quite different in many ways—and yet all of the machines might have the very same "emergent" property of dispensing Starbucks-quality lattes in response to the appropriate voice command. This property thus cannot be identical with any particular set of physical components organized and operating in a particular way, because the same property can supervene on very different physical building-blocks.
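Here is the latte-machine example in schematic form (a toy sketch; the machine designs and internals are invented for illustration):

```python
# A toy sketch of multiple realization; the machine designs are invented.
# Two very different internal organizations give rise to the same property:
# dispensing a latte in response to the appropriate voice command.

class PumpAndBoilerMachine:
    def respond(self, command):
        # Internals: boiler heats water, a pump forces it through grounds.
        if command == "Overpriced coffee! Now!":
            return "latte"
        return None

class PodAndInductionMachine:
    def respond(self, command):
        # Internals: induction heater, pre-ground pod, frothing wand.
        if command == "Overpriced coffee! Now!":
            return "latte"
        return None

# The same property supervenes on very different physical stories:
for machine in (PumpAndBoilerMachine(), PodAndInductionMachine()):
    assert machine.respond("Overpriced coffee! Now!") == "latte"
```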

Now I suppose it is possible that only organic materials exactly like those in terrestrial brains, organized in only the ways that more highly developed terrestrial brains are organized, and operating only in terms of those brains’ distinctive “programming parameters,” can give rise to what we know as consciousness. But this seems implausible to me. Maybe that’s just because I watched one too many episodes of Star Trek--but I think it's more than that. Among other things, rejecting multiple realizations makes the emergence of consciousness through natural selection seem to be an even bigger miracle than it already is (something I doubt a materialist would find congenial).

So, I’m inclined to think that if consciousness is the kind of thing that can emerge from physical systems at all, it will be the kind of thing that will admit of multiple realizations. If so, then one kind of mental phenomenon (say, the experience of pain) cannot be identified with the kind of neurological activity it happens to be correlated with in human brains. And so, if you are a materialist, you will need a different account than identificationism. And since the contender which seems to arise most naturally out of the ashes of identificationism is emergentism, this is what I will take up in my next post.

Tuesday, August 10, 2010

New Review of IS GOD A DELUSION?

The Lutheran Quarterly, a journal specializing in Lutheran theology, just published a very nice review of my book. Unfortunately, it isn't available online, and I'm not exactly keen to type it in verbatim (and I wonder about the copyright issues of doing so in any event). So I will content myself with some highlights:

"This carefully written and highly readable book by philosopher Eric Reitan engages the current heirs of David Hume and Bertrand Russell...Reitan does so from the perspective of liberal Protestantism, especially as mediated through the lens of the theologian of religious experience, Schleiermacher....

"While the 'new atheists' are his direct target, Reitan's indirect target would be fundamentalists. In a sense, Reitan, with many other apologists, particularly within the Liberal Protestant tradition, see (sic) the fight between atheists and fundamentalists as a whirlpool in which we need not drown....

"On a final note, a major reason that Reitan sees Christianity as plausible is because its primary narrative of a creating, sustaining, and redeeming God is 'beautiful' (185). This book is to be appreciated for its readability, a trait not readily found in many philosophers. Whether or not you like this book will depend on what you think about the liberal Protestant tradition."

The review also devotes a fair bit of attention to my use of Luther's conception of faith at a critical juncture of the book's argument (not surprising given the journal's focus); and--interestingly--the reviewer spends even more time on what he describes as "(o)ne of Reitan's most interesting arguments":  my sustained use of the "brain in a vat" analogy (although I'm not entirely convinced that the reviewer's summary of that argument adequately captures the main thrust of what I was trying to do).

Overall, a good review--and the last sentence pretty well captures my experience of how the book has been received. While I wouldn't say that only liberal Protestants like the book, I would say that those with an affinity for some of the core themes in that tradition tend to be its biggest fans.