Wednesday, May 31, 2006

Why Does "Believe" Have No Present Progressive?

In English, the distinction between dispositional and occurrent uses of verbs is often marked in the present tense, dispositional uses taking the simple present and occurrent uses taking the present progressive. For example: "Corina runs" usually suggests that Corina has the tendency, or disposition, to run, though she may not be running now; while "Corina is running" suggests that running is going on at the very moment of the utterance. "Jamie reads the Bible" suggests that Jamie has the habit of reading the Bible from time to time; while "Jamie is reading the Bible" suggests that Jamie is presently (whether in a narrow or broad sense of the present) reading some bit of, or in the course of reading through, the Bible.

Most philosophers of mind accept a distinction between dispositional and occurrent senses of belief as well. Five minutes ago, before the thought crossed your mind, you dispositionally believed that Jupiter is a planet. Now that you're thinking about it, you occurrently believe that fact. The idea is that we can talk about a person's beliefs dispositionally, without knowing what is presently running through her mind (she might even be in a dreamless sleep), but that beliefs also sometimes come up front, as it were, to play a role in active inference or conscious reasoning, in some more occurrent sense.

It's interesting, then, that ordinary English usage has no (or at least very little) use for the present progressive form of "believes", which we might think would be the natural way to talk about occurrent belief as it occurs. We don't say "Harry is believing that New York City is large." Indeed, my version of MS Word marks that sentence as ungrammatical -- though it has no problem with "Harry is saying that New York City is large"! Likewise, if you search for "is believing" in Google, you find instance after instance of "seeing is believing". If you exclude pages with "seeing", you'll find "hearing is believing", "stealing is believing", and the like, but not a present progressive in sight!

Philosophers of mind sometimes point out that in ordinary English we often use "thinks" to ascribe beliefs: "Joan thinks plaid ties are chic". But here again, English steers us away from occurrent belief: The present progressive of "thinks" -- "is thinking" -- generally does not ascribe an occurrent belief: "I am thinking of Paris", "Jee Loo is thinking about philosophy". Perhaps the closest natural English comes to ascribing an occurrent belief with the present progressive "is thinking" is something like this: "I've been thinking that maybe we should be leaving soon". But even that last seems not so much to ascribe the belief that maybe we should be leaving soon as the thought that we should be.

I'm not a huge fan of the examination of ordinary language to reveal truths about the mind -- at least in the way philosophers have often done it. But in this case, I wonder if English usage isn't onto something. I wonder whether, maybe, there's enough of a difference between occurrent mental states, like thoughts and judgments, and dispositional ones like beliefs, that we shouldn't simply assimilate the former into the latter in the guise of "occurrent belief".

(If so, this would fit nicely with my sense -- as explored in my earlier racism and God & Heaven posts -- that we often sincerely judge or assert things we don't fully and genuinely believe.)

Monday, May 29, 2006

Happy Lynchers

To render the photos below less viscerally disturbing, I've blanked out sections. They remain, I think, ethically quite disturbing.

[Photographs: lynching postcards, with the victims blanked out.]

The blanked out parts of the pictures are, of course, the victims of lynchings (all African-American) in early 20th-century United States. I won't risk the sensibilities of readers any more than I already have by describing the details of the corpses, but to put it blandly, in the first and third pictures especially, they are grotesquely mutilated.

I post these pictures not (I hope) from any motive of voyeurism, but to share with you my sense that they powerfully raise one of the most important issues in moral psychology: the emotions of perpetrators of evil. Though it's a bit hard to see in these small pictures (the maximum size Blogger allows), I hope it's nonetheless evident that most of the lynchers look relaxed and happy -- despite being only feet from a freshly murdered corpse. It was not uncommon to bring small children along to lynchings, to collect souvenirs, to take photos and sell them as postcards. (These pictures are from a collection of just such postcards: James Allen's Without Sanctuary.)

Although I'm attracted to a roughly Mencian view of human nature, according to which something in us is deeply revolted by evil, when that evil is nearby and "in one's face" as it were, I find pictures like this somewhat difficult to reconcile with that view. Are these people inwardly revolted, under their smiles?

Friday, May 26, 2006

Images as "Pictures"

In Wednesday's post, I wrote about the picture analogy for vision and the implicit (and possibly erroneous) assumptions about visual experience it invites. Well, the picture analogy for visual imagery -- visual imaginings -- is even more pervasive. It almost doesn't feel like a metaphor to call an image a "picture" in the mind or to say "picture to yourself".

This struck me with particular force as I was reading through English translations of ancient Greek texts, looking for analogies between visual experience and paintings or pictures (I found exactly one in the whole corpus, though I can't pretend to have given it exhaustive coverage), pursuing thoughts relevant to the issues in Wednesday's post. I kept finding reference to "picturing" in the English translations, in discussions of imagery -- but, oddly, different translations would use that word in different places. When I went back to the original Greek, it was evident in nearly every case I examined that it was the translator bringing in the metaphor (possibly without even being aware of its metaphorical status or its potential interest for understanding the history of conceptions of imagery).

The analogy between images and pictures has worked its way so deeply into our thoughts about imagery that it's almost invisible as an analogy.

Now in some sense, surely, images are like pictures and the analogy between them is a good one. But let me suggest some ways in which images might not be like pictures (might not -- I don't think these issues have been adequately explored yet).

(1.) Images might be three-dimensional in some robust sense, while pictures are flat. Perhaps even (as some people have reported and as Borges describes in his story "The Zahir") images can be experienced simultaneously from multiple angles?

(2.) Images might be indeterminate in a way it's difficult for pictures to be. For example, it might be indeterminate whether the man you're imagining is wearing a hat, or what color his jacket is, or whether he has a beard.

(3.) Images might have their interpretations built into them in a way that pictures do not. For example, when we imagine an "ambiguous figure" like Wittgenstein's duck-rabbit or the Necker cube, the image might incorporate or involve first one interpretation and then another, while nothing in the "picture" before our mind changes. (This thought arose recently in conversation with Charles Siewert, as we were reading a paper by William Robinson that seemed to assume that there's nothing different between the visual imagery experience of a Necker cube interpreted one way and that of the same cube interpreted another.)

As I mentioned in Wednesday's post, I'm inclined to think our metaphors for the mind often distort our understanding of it. So I wonder: Would people be more likely to accept features 1-3 as aspects of our imagery experience if the picture analogy weren't so deeply ingrained in our thinking?

Wednesday, May 24, 2006

Do Tilted Coins Look Elliptical? (Part Two)

Does a tilted coin, in some sense, "look elliptical"? Do farther-away streetlights in some sense look smaller than nearer ones? Monday, I raised some geometrical concerns for the apparently commonsensical view that they do.

Now perhaps it's just introspectively obvious and undeniable that tilted coins look elliptical, geometrical cavils aside? I've put a coin on my desk and I'm staring at it now. Does it look elliptical, in some sense -- in any sense? I confess uncertainty.

It's strange, though, in a way, to feel uncertain about such a thing. What, after all, is nearer to us than our own experience? Don't I have an immediate, privileged access to it? -- an access of the sort that many philosophers, at least since Descartes, have thought infallible or indubitable or incorrigible or at least overwhelmingly accurate? How could I go wrong about how things seem to me visually? And if I can't go wrong, then what is there to be uncertain of?

(Readers who know my work will know that much of what I've written recently is dedicated to undermining optimistic views about how well we know our own conscious experience. See especially here.)

Maybe part of what leads us to think the coin looks elliptical is that if I were to take a photograph of the scene from my perspective, or paint it, the image of the coin in that photograph or painting would be an ellipse. But we shouldn't infer straightaway from the ellipticality of the coin in a potential picture to its ellipticality in our experience. Perhaps, indeed, we only think the coin looks elliptical because we implicitly over-analogize our visual experience to pictures.

We over-analogize our experience to outward media and technology quite commonly, I think. Alva Noe has argued, for example, that we often over-analogize vision to photography in taking our visual experience to have photographically rich detail deep into the periphery. I have argued that we over-analogize dreams to movies -- so much so that in the 1950s most Americans said they dreamed in black and white. Similarly, might we, in analogizing visual experience to snapshots and movies, go so far as mistakenly to attribute features of those outward technologies back into our experience?

It's interesting to note in this connection that in ancient Greece, where vision was commonly analogized to impressing a seal in wax and hardly ever analogized to painting, philosophers generally did not say that tilted coins look elliptical, that farther columns look smaller, etc. Epicurus went so far as to assert positively that visual experiences have the same three-dimensional shape as the objects they are experiences of.

Of course we can tell by looking that a coin viewed obliquely would project as an ellipse upon an intervening plane -- but we can also tell by looking that it would project as a concave ellipsoid upon the surface of a sphere and that it would make a certain type of impression on wax from its current orientation. Could it only be because we implicitly think of vision as like photography that we're inclined to think of the first of these, rather than the other two, as the more adequate characterization of the experience of visual perspective?

(For fuller reflections on this topic see my essay "Do Things Look Flat?")

Monday, May 22, 2006

Do Tilted Coins Look Elliptical? (Part One)

Put a coin on your desk and look at it from an angle. Is there some sense in which it looks elliptical? Look out your window at a row of receding streetlights. Is there some sense in which the farther ones look smaller?

Many philosophers have said such things -- from Malebranche in the 17th century through contemporary philosophers of perception Michael Tye (e.g., here) and Alva Noe. But is this right?

One reason to have some doubts is this: It's just not clear what the geometry of such a view is supposed to be.

Tye and others have suggested that it involves something like projection onto a plane perpendicular to the line of sight: If you drew a straight line from your eye to the object, then interposed between yourself and that object a plane perpendicular to the line, what kind of shape would you have to put on that plane to perfectly occlude the object? An ellipse, in the case of the coin. A smallish figure in the case of a distant streetlight, a larger figure in the case of a nearer one.

So far, so good. But the problem with doing the geometry that way is that lines projected onto the plane from objects off to the side will intersect the plane obliquely, with the consequence that they will appear much larger in the plane than their straight-ahead counterparts -- weirdly larger, if projective size is supposed to correspond to experienced size. My friend Glenn Vogel drew me up a figure that very nicely illustrates this point:
[Figure: two equal spheres at the same distance from the eye, one straight ahead and one off to the side, projected onto a plane; the off-axis sphere casts a much larger shadow.]

See how the sphere to the right makes a much bigger shadow in the plane?

One could avoid the problem of the projective size of objects off to the side by making the projective surface, between you and the objects, a sphere rather than a plane. Imagine bending the plane in the figure back to the right, wrapping it around until it was an even sphere encircling the point where the lines converge. Then the projective shadows would be the same size. Projecting onto a sphere rather than a plane would also respect the idea that apparent size varies with visual angle subtended.
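
For the numerically inclined, here's a minimal sketch of the two geometries in Python (hypothetical function names; equal spheres of radius r, each at distance d from the eye, projection surface at distance D):

    import math

    def planar_shadow_width(D, d, theta, r):
        """Radial width of the shadow a sphere casts when projected from
        the eye onto a plane perpendicular to the straight-ahead line of
        sight, D units away. theta is the sphere's angle off-axis."""
        alpha = math.asin(r / d)  # half the visual angle the sphere subtends
        return D * (math.tan(theta + alpha) - math.tan(theta - alpha))

    def spherical_shadow_width(D, d, theta, r):
        """Arc length of the same shadow on a sphere of radius D centered
        on the eye: proportional to visual angle, independent of theta."""
        return D * 2 * math.asin(r / d)

    D, d, r = 1.0, 10.0, 1.0
    for deg in (0, 30, 60):
        theta = math.radians(deg)
        print(deg, round(planar_shadow_width(D, d, theta, r), 3),
              round(spherical_shadow_width(D, d, theta, r), 3))

The planar shadow roughly quadruples by 60 degrees off-axis (0.828 vs. 0.201 straight ahead), while the spherical shadow stays fixed at 0.200 -- which is just the point of the figure above, in numbers.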

But now we've lost our ellipse! The ellipse is a planar figure. Projecting onto a sphere generates not an ellipse but rather a concave ellipsoid.

Should we say, then, that strictly speaking the coin looks concave? That would be strange! In fact, that seems just plainly, observably false -- even with a larger circular object, like a plate, held close to the face so that its concavity given a spherical projection would be considerable.

So the puzzle remains: Is there some way to make sense of the geometry of a view according to which circular objects, viewed obliquely, look elliptical and distant things look smaller than their nearer counterparts? Despite the frequency with which philosophers say such things, no one has adequately explained how this is supposed to work.

(Steven Lehar has perhaps come closest, attempting to get very clear about the geometry; but I suspect that his view will end up making the coin look concave.)

In Part Two, I'll say a bit about what cultural influences and metaphors might be driving all this. You can also look at my essay here.

Friday, May 19, 2006

Can a State Be "Half-Conscious"?

I'm no fan of sharp lines. I'm deeply committed to the idea that the world -- especially the most complex parts of it, like the mind -- is thoroughly vague, blurry, splintered, dissociative, in-betweenish. (See for example my essays here and here and here.) But one thing I can't get my head around is an in-between state of consciousness -- a state of mind that is somewhere between being an experience and not being an experience. I see no theoretical reason to suppose such states can't exist; and given gradualism in the development and phylogeny of brains, there seems to be excellent reason to suppose there'd be a vague zone between conscious and nonconscious. But that idea, despite its appeal in the abstract, eludes my understanding when I try to reflect on it more deeply.

Of course there could be peripheral experiences, such as the experience of feeling your feet in your shoes when you're thinking about other things. (Maybe there aren't such experiences in fact, but that's a different question; at least they're conceivable.) Such states may in some sense be "less conscious" than experiences in focal attention, as it were. But, it seems to me, if you experience your feet in your shoes, no matter how peripherally, inarticulately, fuzzily, inattentively, then you genuinely experience them in that peripheral, inarticulate, fuzzy, inattentive way. If frogs (or ants or slugs or whatever) have the hazy beginnings of conscious experience -- say visual and tactile conscious experiences -- then it seems to me that they are genuinely conscious, in that hazy way. Either their stream of conscious experience is a total blank (i.e., there is no stream of conscious experience for them) or it has some limited range of components. If the former, they have no conscious states; if the latter, then they are conscious. I cannot envision a "between" state here.

Thus, it seems to me that "being conscious" is more like "having money" than "being red". (Does Searle say this somewhere?) Having money comes in degrees -- some people have more and some have less. But even one cent is money. Either you have money or you don't (setting aside issues like debt and illiquid goods). Being red also comes in degrees -- one thing can be redder than another -- but there are "in-between" states -- shades along the spectrum from red to purple, say, or red to orange, where it makes sense to say "Well, it's a vague matter whether this shade counts as red or not -- it depends on how one draws one's lines -- it's kind of between red and purple." What I don't see is how that could be the case for consciousness. Can we say of a state -- a peripheral conscious state in a human being, or a state in a frog -- that whether it's a state that is experienced, whether it has "phenomenal character", is a vague issue, that it depends on how one draws one's lines, that it's kind of between having a phenomenal character and not having one?

Surely there are those who will say yes. And I'd like to say yes. But I can't quite figure out how this could be so (without adding something to "conscious" to change the meaning from that intended here -- like changing it to mean "self-conscious" or "acutely aware"). So I'm rather stuck. Is this just a failure of imagination?

Wednesday, May 17, 2006

How Many People *Really* Believe in God and Heaven?

Most people in the United States say they believe in God and Heaven. If all there is to believing something is being disposed sincerely to claim it, then I suppose they do believe. But what if believing something requires being disposed consistently to think and act in accord with one's belief? Then the matter becomes less clear.

Consider the implicit racist (discussed in an earlier blog entry "Do You Know If You're a Racist?") who says (for example) that "dark-skinned people are as intelligent as light-skinned people" but whose pattern of behavior, apart from her occasional avowals to the contrary, consistently reveals racist expectations -- she's surprised when an African-American says something smart; she expects, with no real basis, LeShaun to do poorly in her class; etc. I don't think we want to say that such a person really believes that dark-skinned people are as intelligent as light-skinned -- at best, she's in a muddled state somewhere between believing it and failing to believe it. (See also here my essay "In-Between Believing".)

I suspect most people who avow belief in God and Heaven are in a muddled, in-betweenish state of this sort. What you wouldn't do with a neighbor watching, you would do with God watching, when eternal bliss and suffering are at stake? One could posit massive irrationality here; but it seems easier and more plausible to me -- once one is comfortable with the idea of in-between beliefs and dissociations between avowals and one's real attitudes -- to suspect that such a person doesn't fully and completely, genuinely believe that God watches every move. Why is Hell less frightening than jail? Because, I suspect, belief in Hell is not fully written into the "believer's" structure of dispositions, reactions, and patterns of thought -- just as belief in the equality of the races is not written into the implicit racist's structure of dispositions, reactions, and patterns of thought.

"Faith" can mean a lot of things. But perhaps we can think of one type of "faith" as merely sincere profession and hope, and desire to believe, without the fully saturated, implicit, taking-for-granted of the truth of the thing that is the province of full and genuine belief.

Monday, May 15, 2006

What Does It Mean to Say "Human Nature Is Good"?

I've been thinking a bit recently about the claim that "human nature is good", famously advocated by Rousseau in the West and by Mencius in ancient China.

Well, already that way of putting it is disputable! Is there one claim that both Rousseau and Mencius make? -- or, instead, when Rousseau says "la nature humaine est bonne" and Mencius says "xing shan", though the best English translation of their statements is "human nature is good", each means something quite different?

One might be led to this thought, especially, if one bears in mind the "state of nature" thought experiment that is given high prominence in many discussions of Rousseau. The "state of nature", per Rousseau, is a (perhaps fictional) state in which human beings exist without any societal or cultural ties. A certain reading of Rousseau's Discourse on Inequality might lead one to suppose that when Rousseau says "human nature is good" he means that people in the state of nature are good. If that's what Rousseau means, then he must mean something different from Mencius, since Mencius does not even contemplate "the state of nature" but always imagines people as thoroughly embedded in some society or other.

But I don't think we need to read Rousseau this way. For one thing, it saddles Rousseau with a strange view of the "natural". Human beings, of course, are naturally social -- like wolves and ants. No naturalist (not even in Rousseau's day!) would think to separate the wolf from the pack or the ant from its colony to determine its "natural" behavior. Human beings deprived of society -- like the occasionally discovered "wild child" -- will lack language and fear humans. But surely this isn't our "natural" state!

If we look to Mencius and to Rousseau's later work Emile (which Rousseau himself said was the key to understanding all the rest of his work) we see, I think, a developmental approach to the "natural". What is natural is what arises in a healthy process of normal development, in normal people, without external imposition. Both Rousseau and Mencius think morality arises from such a natural, healthy process; and that, I'd suggest, is what is at the core of each of their claims about "human nature".

Interpreted in that way, and developed a bit (as both Rousseau and Mencius do develop it), the question whether "human nature is good" gains some specific, interesting, and empirically explorable content.

By the way, Hobbes and Xunzi (Rousseau’s and Mencius’s most famous opponents on the issue of human nature) would both say, I think, that morality is the result of external imposition, rather than something that emerges from within in a normal process of development, so this interpretation can get them right, too.

I’ve been revising an essay on this recently, which you can see here.

Friday, May 12, 2006

Why Do the Good Guys Always Win in Morality Tales?

When we're teaching children morality, we tell them tales of virtue and self-sacrifice where -- virtually always -- the person who was virtuous and self-sacrificing ends up better off materially in the end than the person who is greedy and grasping. Whatever sacrifice the protagonist makes, whatever goal is not pursued due to moral qualms, is more than compensated in the end. The good guy refuses to cheat but wins the fight. Cinderella gets the prince. Virtue pays -- in golden coin!

Recently I've been reading through the ancient Chinese philosophers Mencius and Xunzi. And whenever I do, I'm struck by their repeated, seemingly Pollyannish, insistence that following the rules of morality will lead one to wealth and political success while breaking those rules will bring disaster. Their aim in saying such things is to win the hearts of vicious princes over to the path of morality.

Now maybe it's true that virtue pays better than vice in the long run -- that there's something paradoxically self-defeating about greed and something paradoxically profitable in self-sacrifice -- but that empirical question isn't what's troubling me now. The issue I want to raise is: Why is it so rare (except in the most subtle adult literature and film) to portray a virtuous protagonist as losing out because of her virtue, while still conveying the message that it's a good thing to be virtuous? (Minor characters are allowed to suffer for their virtue.)

Consider the end of Saving Private Ryan. The platoon shows mercy on a captured German, letting him go rather than executing him. The German comes back and kills one of them in a later battle. The audience seems invited to the conclusion that it was a mistake to have let him go, because of this bad consequence. The audience is not drawn to the conclusion (I think) that letting him go had been the right thing to do and simply had a bad outcome. Now maybe it was a mistake to let him go. Maybe it wasn't all things considered the right thing to do. But it doesn't seem to me that the fact that he comes back to do harm should close the question from the point of view of the narrative. And yet it does.

One might worry that incorporating the "virtue pays" structure so deeply into our morality tales risks teaching children that if virtue doesn't pay, it isn't really virtue. If we really want to inculcate an ethos of self-sacrifice and a willingness to do what's right regardless of the consequences, why don't we offer tales in which the protagonist may suffer or be rewarded for her virtuous behavior, but is admired nonetheless?

Now, I respect folk traditions of moral teaching enough to think that morality tales have the structure they do for excellent reason, and I have some preliminary thoughts about why that might be, but this seems to me a question that deserves more attention than it gets in treatments of moral education.

Wednesday, May 10, 2006

Can There Be Non-Obvious Illusions?

[Figure: the Horizontal-Vertical illusion -- a cross composed of a vertical and a horizontal line of equal length.]

Consider the figure above. It is standardly presented as an example of the "Horizontal-Vertical" (or "Vertical-Horizontal") illusion. (This particular figure is scanned in from a classic text on illusions, Stanley Coren and Joan S. Girgus (1978), Seeing is Deceiving (Hillsdale, NJ: Erlbaum), p. 29.)

My question is: How do we know if there is an illusion here? Coren and Girgus state simply that the vertical line is seen as being longer than the horizontal line. Yet I'm not sure I experience it that way. Of course the current context of presentation is somewhat less than ideal -- for example there are text and borders nearby that may compromise the illusion. But even in ideal circumstances, not everyone will swiftly say that the vertical line looks longer. In this respect, the current illusion is unlike more dramatic illusions, like the Poggendorff illusion, that command instant assent in most viewers.

If people are given the chance to choose from among an array of cross-like figures, ranging from those in which the vertical line is obviously longer to those in which the horizontal line is obviously longer, they may err by choosing a figure with the vertical line too short -- suggesting illusion. Or if given the chance to adjust the lengths of the lines until they seem the same size, they may also err. But does this mean that in the normal case -- when they are simply presented with a perfect cross, say that it's a perfect cross, and say that it seems to them visually that it's a perfect cross -- they are nonetheless experiencing an illusion? An illusion they can't report and that doesn't fool them? That strikes me as rather strange!
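
To make the worry concrete, here's a toy simulation (in Python, with entirely hypothetical names and numbers): a simulated observer whose visual system stretches verticals by 6% adjusts the vertical line until the cross looks equal, and so ends up setting the vertical physically too short -- even though at every moment the figure looks fine to her.

    import random

    def adjust_to_match(bias=1.06, noise=0.02, step=0.005, trials=200):
        """Simulated observer adjusts the vertical line of a cross until it
        looks equal to the horizontal. 'bias' is the assumed perceptual
        magnification of verticals. Returns the physical vertical/horizontal
        ratio at subjective equality."""
        v = 1.0
        for _ in range(trials):
            perceived = v * bias + random.gauss(0, noise)
            v += step if perceived < 1.0 else -step  # lengthen if it looks short
        return v

    settings = [adjust_to_match() for _ in range(50)]
    print(sum(settings) / len(settings))  # about 0.94: vertical set too short

The systematic error shows up in the settings; the question is whether that licenses saying the observer's experience of a perfect cross is illusory.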

On the other hand, maybe it shouldn't be surprising if people are often inexpert in discerning their own visual experience. It may take a bit of effort to learn to judge accurately whether a visual illusion is present or not -- and certainly there are cases (this may be one) in which I feel unsure whether I'm experiencing a visual illusion. If I'm unsure, doesn't that suggest the possibility that I might be making a mistake?

(For further reflections on this issue, see Introspective Training Apprehensively Defended.)

Monday, May 08, 2006

Development of the Moon Illusion?

Everybody knows the moon illusion: The moon looks bigger when it's near the horizon than when it's high overhead, despite the fact that it subtends the same visual angle in each case. This is part of what makes certain famous Ansel Adams photographs look weird. The illusion disappears if you view the moon through a paper towel roll, blocking out visual information about the horizon.
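
(A quick back-of-the-envelope check of the visual angle claim, using round numbers for the moon's radius and distance:)

    import math
    # angular diameter = 2 * atan(radius / distance),
    # with radius ~1737 km and mean distance ~384,400 km
    print(math.degrees(2 * math.atan(1737.0 / 384400.0)))  # ~0.52 degrees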

Various explanations of the moon illusion have been offered; I take no stand on them, but merely offer the following observation.

I was driving with my four-year-old son, Davy, along a stretch of road beside some hills. The moon was full. Because of the hills, the visual horizon from our perspective was wavy. As we drove, the horizon would rise up closer to the moon, as a hill rose to greet it, as it were, then drop away from the moon when there was a gap in the hills. The moon was very striking that night and I'd already noticed it, but had said nothing to Davy.

From the back seat, Davy said, "The moon is getting bigger and smaller!" I said, "It is?" and looked toward the moon again. It seemed to stay a constant size to me.

Now, one might expect Davy's experience to be the normal one, since the moon was sometimes nearer, sometimes farther from the visual horizon from our perspective. Perhaps that is the original experience of children, but adults learn to compensate somehow?

Friday, May 05, 2006

Is There an Experience of Thinking?

Think of the Prince of Wales. Now consider: Was there anything it was like to have such a thought? Maybe you formed a visual image of the Prince, or you heard the words "Prince Charles" in inner speech? Neither of these is tantamount to thinking of the Prince of Wales: You might have the visual image while thinking about someone else with similar looks (the Prince's twin, say, if he had one). You might have that bit of inner speech while thinking of someone else named Prince Charles.

So consider, again, whether there was anything in particular it was like to think about Prince Charles -- some conscious experience particular to that thought, not reducible to inner speech or visual imagery (though perhaps involving them as a part)? If you're like me, you'll find the answer to that question non-obvious. I'm not sure whether there's something it's like to have that thought, whether there is a distinctive phenomenology of cognition. Philosophers and psychologists who have considered the question have gone different ways on it. Most recently, Pitt and Siewert and Horgan have defended the existence of a distinctive phenomenology of thought, while Robinson (following perhaps the dominant trend historically) has denied that there's any distinctive phenomenology apart from imagery and maybe vague feelings of various sorts.

Now, isn't it a bit weird and surprising that this question should be so difficult to answer? After all, on standard philosophical (and commonsense) views of self-knowledge, we have excellent, privileged, perhaps even infallible access to our own currently ongoing conscious experience or phenomenology. What, it seems, could be easier than noting whether there's a distinctive phenomenology of thought or not? If there is such a phenomenology, how could we miss it? If there isn't, how could one reflect and come to think there is -- that is, invent an entire category of conscious experiences that simply don't exist?

You might say: There's no dispute about the experience. The phenomenology itself is obvious. The only dispute is about how to label or categorize the experience. Yet I don't think that's how the disputants see it. And it doesn't seem that way to me, when I reflect. I don't feel that I'm just puzzled about words. The very experience itself seems, to me, difficult to think about, understand, lay hold of.

And yet it's so close, so immediate, so constant, so readily available, one might think....

Wednesday, May 03, 2006

Is Conscious Experience Rich or Thin? And How Does One Find Out?

Advocates of rich views of conscious experience, such as Searle (one of my graduate advisors) and William James, think that there's a constant stream of conscious experience outside of attention -- that we have a constant peripheral consciousness of the hum of traffic in the background, of the feeling of our feet in our shoes, of subtle emotional conditions, etc. James (as he so often does) puts things beautifully:

The next thing to be noticed is this, that every one of the bodily changes, whatsoever it be, is felt, acutely or obscurely, the moment it occurs.... Our whole cubic capacity is sensibly alive; and each morsel of it contributes its pulsations of feeling, dim or sharp, pleasant, painful, or dubious, to that sense of personality that every one of us unfailingly carries with him (1890/1981, pp. 1066-1067).

Advocates of thin views of consciousness (e.g., Dennett, Mack and Rock, Blackmore) deny this. When something is out of attention, it is generally not experienced -- not even in a secondary, peripheral way. One does not go through life constantly feeling one's shoes against one's feet. It's only when one thinks about one's feet that one experiences the tightness of one's shoes. It may seem that we have constant experience in our feet, but that's a "refrigerator light error": Just as a child might think the refrigerator light is always on because it's on whenever he checks it, so also someone might think we constantly experience our feet because we experience our feet whenever we think about whether there is experience in our feet.

This is a crucial, foundational issue in the study of consciousness, and in my view no one has given a satisfying argument one way or another on it. All arguments either rely upon doubtful (and conflicting) intuitive, armchair reports or they rely on experimental data that beg exactly the crucial question (by assuming one or another relationship between the reportability of a stimulus and whether it's consciously experienced).

The only way to get started addressing such a question, it seems to me, is to ask people about their experience -- but not about their current experience (which is subject to the refrigerator light problem). Rather, we should ask them to go about their normal business and then periodically interrupt them with questions like "were you, just then, experiencing your feet in your shoes or not?" But of course we can't follow people around; instead, we can give them random beepers (a la Hurlburt) and prime them in advance to ponder one specific such question (was I having tactile experience in my feet? was I having visual experience?).
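
Here, for what it's worth, is a minimal sketch of the sampling logic (hypothetical code, not Hurlburt's actual protocol -- his beepers are dedicated hardware): beeps at unpredictable times, each paired with the one pre-assigned question.

    import random

    QUESTION = "Were you, just then, experiencing your feet in your shoes?"

    def beep_times(n_beeps, mean_gap_min=90.0, seed=None):
        """Random beep schedule: exponentially distributed gaps make each
        beep unpredictable, so subjects can't anticipate (and thereby
        alter) the sampled moment."""
        rng = random.Random(seed)
        t, times = 0.0, []
        for _ in range(n_beeps):
            t += rng.expovariate(1.0 / mean_gap_min)
            times.append(round(t))
        return times

    for t in beep_times(6, seed=1):
        print("minute", t, "-- BEEP --", QUESTION)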

I did this. The results are here -- in a rather long-winded paper, I'm afraid.

I'm also afraid that the beeper method is subject to all sorts of biases and distortions. But as far as I can see there is no better method for studying this question. (Suggestions welcome.)

Monday, May 01, 2006

The Problem of the Ethics Professors

Here's a question to consider: Why don't ethics professors behave better than they do?

The vast majority of philosophers I've polled think that ethics professors, on average, behave just about as ethically as their peers in logic, metaphysics, etc., and as others of their socio-economic class generally -- or that they behave considerably worse. What might explain this fact, if it is a fact? The only explanations I can think of are either empirically implausible or disturbing in one way or another.

Here are some of the most obvious possible explanations:

(1.) Ethical reflection does not lead to ethical behavior. This might be because: (1a.) Ethical reflection reveals that moral behavior is not particularly advisable, or (1b.) Ethical reflection is impotent to affect one's general patterns of responding morally to the world.

(2.) Moral philosophers do not engage in ethical reflection -- or at least not the kind of ethical reflection relevant to everyday moral living. (This might seem plausible, in a way, since so much of ethics and moral philosophy is so abstract -- and yet still one might think or expect or at least hope that ethicists would be especially primed to see the moral dimensions in the everyday decisions they face.)

(3.) Ethical reflection does indeed lead to moral improvement, and the reason ethicists don't behave better than others is that they start out morally worse than the rest of us. They are, perhaps, drawn to ethics because morality is, as it were, a problem area in their lives. (There's something appealing in this thought -- and it harmonizes with the old joke that in psychology the crazy ones go into clinical psychology, the socially awkward ones into social psychology, etc., but really -- do you think we'll find patterns of, say, juvenile delinquency in the early behavior of ethics professors? Somehow I doubt it.)

Is there an appealing and plausible way out of this problem?