Wednesday, November 24, 2010

Professors' Moral Attitudes about Responding to Student Emails Are Almost Completely Unrelated to Their Actual Responsiveness to Student Emails

... or so say Josh Rust and I in an article we are busily writing up.  (We reported some of the data in an earlier blog post here.)

Below is my favorite figure from the current draft.  On the x-axis is professors' expressed normative view about the morality or immorality of "not consistently responding to student emails", in answer to a survey question, with answers ranging from 1 ("very morally bad") through 5 ("morally neutral") to 9 ("very morally good").  (In fact, only 1% of respondents answered on the morally good side of the scale, showing that we aren't entirely bonkers.)  On the y-axis is responsiveness to three emails Josh and I sent to those same survey respondents -- emails designed to look as though they were from undergraduates, asking questions about such things as office hours and future courses.

(I can't seem to get my graphs to display quite right in Blogger.  If the graph is cut off, please click to view the whole thing.  The triplets of bars represent ethicists, non-ethicist philosophers, and professors in departments other than philosophy, respectively.)

Tuesday, November 16, 2010

Carruthers and Schwitzgebel on Knowledge of Attitudes

... a Philosophy TV dialogue that came out a couple of weeks ago, but which I forgot to link to at the time.

Peter and I both deny that we have privileged self-knowledge of our attitudes (at least in any strong sense of "privilege"), but since we're philosophers we still find plenty to disagree about!

Thursday, November 11, 2010

The Phenomenology of Being a Jerk

Most jerks, I assume, don't know that they're jerks. This raises, of course, the question of how you can find out if you're a jerk. I'm not especially optimistic on this front. In the past, I've recommended simple measures like the automotive jerk-sucker ratio -- but such simple measures are so obviously flawed and exception-laden that any true jerk will have ample resources for plausible rationalization.

Another angle into this important issue -- yes, I do think it is an important issue! -- is via the phenomenology of being a jerk. I conjecture that there are two main components to the phenomenology:

First: an implicit or explicit sense that you are an "important" person -- in the comparative sense of "important" (of course, there is a non-comparative sense in which everyone is important). What's involved in the explicit sense of feeling important is, to a first approximation, plain enough. The implicit sense is perhaps more crucial to jerkhood, however, and manifests in thoughts like the following: "Why do I have to wait in line at the post office with all the schmoes?" and in often feeling that an injustice has been done when you have been treated the same as others rather than preferentially.

Second: an implicit or explicit sense that you are surrounded by idiots. Look, I know you're smart. But human cognition is in some ways amazingly limited. (If you don't believe this, read up on the Wason selection task.) Thinking of other people as idiots plays into jerkhood in two ways: The devaluing of others' perspectives is partly constitutive of jerkhood. And perhaps less obviously, it provides a handy rationalization of why others aren't participating in your jerkish behavior. Maybe everyone is waiting their turn in line to get off the freeway on a crowded exit ramp and you (the jerk) are the only one to cut in at the last minute, avoiding waiting your turn (and incidentally increasing the risk of an accident and probably slowing down non-exiting traffic). If it occurs to you to wonder why the others aren't doing the same, you have a handy explanation in your pocket -- they're idiots! -- which allows you to avoid more uncomfortable potential explanations of the difference between you and them.

Here's a self-diagnostic of jerkhood, then: How often do you think of yourself as important, how often do you expect preferential treatment, how often do you think you are a step ahead of the idiots and schmoes? If this is characteristic of you, I recommend that you try to set aside the rationalizations for a minute and do a frank self-evaluation. I can't say that I myself show up as well by this self-diagnostic as I would have hoped.

How about the phenomenology of being a sweetie -- if we may take that as the opposite of a jerk? Well, here's one important component, I think: Sweeties feel responsible for the well-being of the people around them. These can be strangers who drop a folder full of papers, job applicants who are being interviewed, or their own friends and family.

In my effort to move myself a little bit more in the right direction along the jerk-sweetie spectrum, I am trying to stir up in myself more of that feeling of responsibility and to remind myself of my fallible smallness.

Thursday, November 04, 2010

Not By Argument Alone (by Guest Blogger G. Randolph Mayes)

I just gave a talk at Gonzaga University called “Not by Argument Alone” in which I tried to show how explanatory reasoning figures into the resolution of philosophical problems. It begins with the observation that we sometimes have equally good reasons for believing contradictory claims. This is the defining characteristic of philosophical antinomies, but it is a common feature of everyday reasoning as well.

For example, Frank told me to meet him at his office at 3 PM if I wanted a ride home. But I’ve been waiting for 15 minutes now and still no Frank. This problem can be represented as a contradiction of practical significance: Frank both will and will not be giving me a ride home. One of these claims must go. The problem is that I have very good reasons for believing both. Frank is a very reliable friend, as is my memory for promises made. On the other hand, my ability to observe the time of day and the absence of Frank at that time and location is quite reliable as well.

So how do I decide which claim to toss? I consider the possibility that Frank is not coming, but this immediately raises the following question: Why not? (He forgot; he lied; he was mugged; I am late?) I consider the possibility that Frank will still show. This immediately raises another question: Why isn’t he here? (He was delayed; I am early; he is here but I don’t see him?) Both of these questions are requests for explanations, and producing good answers to them is essential to the rational resolution of the contradiction. Put differently, I should deny the claim whose associated explanation questions I am best capable of answering.

This is one way of explicating the view that rational belief revision depends on considerations of ‘explanatory coherence.’ The idea is typically traced to Wilfrid Sellars, and it has since been developed along epistemological, psychological, and computational lines. Oddly, however, it has not been explored much as a model for the resolution of philosophical questions. I don’t know why, but I speculate that it is because philosophers don’t naturally represent philosophical thinking in explanatory terms. Typically, a philosophical ‘theory’ is represented not so much as a proposed explanation of some interesting fact as it is a proposed analysis of some problematic concept.

In my view, though, philosophers engage in the creation of explanatory hypotheses all the time. Consider the traditional problem of perception. Just about everyone agrees that we perceive objects. But whereas the physicalist argues that we perceive independently existing physical objects, the phenomenalist is equally persuasive that the objects of perception are mind-dependent. Again, one claim must go. Suppose we deny the phenomenalist’s claim. But then how do we explain illusions and hallucinations, which are phenomenologically indistinguishable from physical objects? Suppose we deny the physicalist’s claim. But then how do we explain the origin of experience itself?

When we explicitly acknowledge that explanation is a necessary step in philosophical inquiry, we thereby acknowledge the responsibility to identify criteria for evaluating the explanations that we propose. Too often philosophical theories are defended simply on the basis of their intuitive appeal. But why would we expect this to reflect anything more than our intuitive preference for believing the claims that they preserve? In science, the ability of a theory to explain things we already know is a paltry achievement. A good explanation must successfully predict novel phenomena or unify familiar phenomena not previously known to be related. Are philosophical explanations subject to the same criteria? If so, then let’s explicitly apply them. If not, well, then I think we’ve got some explaining to do.

This is my last post! Thanks very much for reading and thanks especially to Eric for giving me this opportunity to float some of my thoughts on The Splintered Mind.