Friday, July 31, 2015

Against Intellectualism about Belief

Sometimes what we sincerely say -- aloud or even just silently to ourselves -- doesn't fit with the rest of our cognition, reactions, and behavior. Someone might sincerely say, for example, that women and men are equally intelligent, but be consistently sexist in his assessments of intelligence. (See the literature on implicit bias.) Someone might sincerely say that her dear friend has gone to Heaven, while her emotional reactions don't at all fit with that.

On intellectualist views of belief, what we really believe is the thing we sincerely endorse, despite any other seemingly contrary aspects of our psychology. On the more broad-based view I prefer, what you believe depends, instead, on how you act and react in a broad range of ways, and sincere endorsements are only one small part of the picture.

Intellectualism might be defended on four grounds.

(1.) Intellectualism might be intuitive. Maybe the most natural or intuitive thing to say about the implicit sexism case is that the person really believes that women are just as smart; he just has trouble putting that belief into action. The person really believes that her friend is in Heaven, but it's hard to avoid reacting emotionally as if her friend is ineradicably dead rather than just "departed".

Reply: Sometimes we do seem to want to say that people believe what they intellectually endorse in cases like this, but I don't think our intuitions are univocal. It can also seem natural or intuitive to say that the implicit sexist doesn't really or wholly or deep-down believe that the sexes are equal, and that the mourner maybe has more doubt about Heaven than she is willing to admit to herself. So the intuitive case could go either way.

(2.) Intellectualism might fit well with our theoretical conceptualization of belief. Maybe it's in the nature of belief to be responsive to evidence and deployable in reasoning. And maybe only intellectually endorsed or endorsable states can play that cognitive role. The implicit sexist's bias might be insufficiently responsive to evidence and insufficiently apt to be deployed in reasoning for it to qualify as belief, while his intellectual endorsement is responsive to evidence and deployable in reasoning.

Reply: Zimmerman and Gendler, in influential essays, have nicely articulated versions of this defense of intellectualism [caveat: see Zimmerman's comment below]. I raised some objections here, and Jack Marley-Payne has objected in more explicit detail, so I won't elaborate in this post. Marley-Payne's and my point is that people's implicit reactions are often sensitive to evidence and deployable in what looks like reasoning, while our intellectual endorsements are often resistant to evidence and rationally inert -- so at least it doesn't seem that there's a sharp difference in kind.

(It was Marley-Payne's essay that got me thinking about this post, I should say. We'll be discussing it, also with Keith Frankish, in September for Minds Online 2015.)

(3.) Intellectualism about belief might cohere well with the conception of "belief" generally used in current Anglophone philosophy. Epistemologists commonly regard knowledge as a type of belief. Philosophers of action commonly think of beliefs coupling with desires to form intentions. Philosophers of language discuss the weird semantics of "belief reports" (such as "Lois believes that Superman is strong" and "Lois believes that Clark Kent is not strong"). Possibly, an intellectualist approach to belief fits best with existing work in these other areas of philosophy.

Reply: I concede that something like intellectualism seems to be presupposed in much of the epistemological literature on knowledge and much of the philosophy-of-language literature on belief reports. However, it's not clear that philosophy of action and moral psychology are intellectualistic. Philosophy of action uses belief mainly to explain what people do, not what they say. For example: Why did Ralph, the implicit sexist, reject Linda for the job? Well, maybe because he wants to hire someone smart and he doesn't think women are smart. Why does the mourner feel sorry for the deceased? Maybe because she doesn't completely accept that the deceased is in Heaven.

Furthermore, maybe coherence with intellectualist views of belief in epistemology and philosophy of language is a mistaken ideal and not in the best interest of the discipline as a whole. For example, it could be that a less intellectualist philosophy of mind, imported into philosophy of language, would help us better see our way through some famous puzzles about belief reports.

(4.) Intellectualism might be the best practical choice because of its effects on people's self-understanding. For example, it might be more effective, in reducing unjustified sexism, to say to an implicit sexist, "I know you believe that women are just as smart, but look at all these spontaneous responses you have" than to say "I know you are sincere when you say women are just as smart, but it appears that you don't through-and-through believe it". Tamar Gendler, Aaron Zimmerman, and Karen Jones have all defended attribution of egalitarian beliefs partly on these grounds, in conversation with me.

Reply: I don't doubt that Gendler, Zimmerman, and Jones are right that many people will react negatively to being told they don't entirely or fully possess all the handsome-sounding egalitarian and spiritual beliefs they think they have. (Neither, I would say, do they entirely lack the handsome beliefs; these are "in-between" cases.) They'll react more positively, and perhaps be more open to rigorous self-examination, if you start on a positive note and coddle them a bit. But I don't know if I want to coddle people in this way. I'm not sure it's really the best thing in the long term. There's something painfully salutary in thinking to yourself, "Maybe deep down I don't entirely or thoroughly believe that women (or racial minorities, or...) are very smart. Similarly, maybe my spiritual attitudes are also mixed up and multivocal." This is a more profound kind of self-challenge, a fuller refusal to indulge in self-flattery. It highlights the uncomfortable truth that our self-image is often ill-tuned to reality.

------------------------------------------

Although all four defenses of intellectualism have some merit, none is decisive. This tangle of reasons leaves us in approximately a tie so far. But we haven't yet come to...

The most important reason to reject intellectualism about belief:

Given the central role of the term "belief" in philosophy of mind, philosophy of action, epistemology, and philosophy of language, we should reserve the term for the most important thing in the vicinity.

Both intellectualism and broad-based views have some grounding in ordinary and philosophical usage. We are at liberty to choose between them. Given that choice, we should prefer the account that picks out the aspect of our psychology that most deserves the central role that "belief" plays in philosophy and folk psychology.

What we sincerely say, what we intellectually endorse, is important. But it is not as important as how we live our way through the world generally. What I say about the intellectual equality of the sexes is important, but not as important as how I actually treat people. My sincere endorsements of religious or atheistic attitudes are important, but they are only a small slice of my overall religiosity or lack of religiosity.

On a broad-based view of belief, to believe that the sexes are equal, or that Heaven exists, or that snow is white, is to steer one's way through the world, in general, as though these propositions are true, not only to be disposed to say they are true. It is this overall pattern of self-steering that we should care most about, and to which we should, if we can do so without violence, attach the philosophically important term "belief".


Tuesday, July 28, 2015

Podcast Interview of Me, about Ethicists' Moral Behavior

... other topics included rationalization and confronting one's moral imperfection,

at Rationally Speaking.

Thanks, Julia, for your terrific, probing questions!

Friday, July 24, 2015

Cute AI and the ASIMO Problem

A couple of years ago, I saw the ASIMO show at Disneyland. ASIMO is a robot designed by Honda to walk bipedally with something like the human gait. I'd entered the auditorium with a somewhat negative attitude about ASIMO, having read Andy Clark's critique of Honda's computationally-heavy approach to robotic locomotion (fuller treatment here); and the animatronic Mr. Lincoln is no great shakes.

But ASIMO is cute! He's about four feet tall, humanoid, with big round dark eyes inside what looks a bit like an astronaut's helmet. He talks, he dances, he kicks soccer balls, he makes funny hand gestures. On the Disneyland stage, he keeps up a fun patter with a human actor. ASIMO's gait isn't quite human, but his nervous-looking crouching run only makes him that much cuter. By the end of the show I thought that if you gave me a shotgun and told me to blow off ASIMO's head, I'd be very reluctant to do so. (In contrast, I might quite enjoy taking a shotgun to my darn glitchy laptop.)

Another case: ELIZA was a simple computer program, written in the 1960s, that would chat with a user, drawing on a small set of pre-programmed response templates to imitate a non-directive psychotherapist ("Are such questions on your mind often?", "Tell me more about your mother."). Apparently, some users mistook it for a human and spent long periods chatting with it.
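
To make the mechanism concrete, here is a minimal sketch of ELIZA-style template matching in Python. The patterns and replies are my own invented stand-ins; Weizenbaum's actual script was larger and cleverer (it reflected pronouns, ranked keywords, and so on), but the basic trick is the same: match a template, emit a canned reply.

```python
import random
import re

# A toy ELIZA-style responder. These patterns and replies are invented
# for illustration; they are not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bmother\b", re.IGNORECASE),
     ["Tell me more about your mother."]),
    (re.compile(r"\bi am ([^.!?]+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\?\s*$"),
     ["Are such questions on your mind often?"]),
]
DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(user_input: str) -> str:
    """Return a canned reply from the first matching template, else a default."""
    for pattern, replies in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I am feeling anxious"))
    # e.g. "Why do you say you are feeling anxious?"
```

That such a shallow first-match-wins loop could hold users in long conversations is exactly the point: the superficial features do the persuading, not any inner life.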

I assume that ASIMO and ELIZA are not proper targets of substantial moral concern. They have no more consciousness than a laptop computer, no more capacity for genuine joy and suffering. However, because they share some of the superficial features of human beings, people might improperly come to regard them as targets of moral concern. And future engineers could presumably create entities with an even better repertoire of superficial tricks. When I discussed this issue with my sister, she mentioned a friend who had been designing a laptop that would scream and cry when its battery ran low. Imagine that!

Conversely, suppose that it's someday possible to create an Artificial Intelligence so advanced that it has genuine consciousness, a genuine sense of self, real joy, and real suffering. If that AI also happens to be ugly or boxy or poorly interfaced, it might tend to attract less moral concern than is warranted.

Thus, our emotional responses to AIs might be misaligned with the moral status of those AIs, due to superficial features that are out of step with the AI's real cognitive and emotional capacities.

In the Star Trek episode "The Measure of a Man", a scientist who wants to disassemble the humanoid robot Data (sympathetically portrayed by a human actor) says of the robot, "If it were a box on wheels, I would not be facing this opposition." He also points out that people normally think nothing of upgrading the computer systems of a starship, though that means discarding a highly intelligent AI.

I have a cute stuffed teddy bear that I bring to my philosophy of mind class on the day devoted to animal minds. Students scream in shock when, without warning, I suddenly punch the teddy bear in the face in the middle of class.

Evidence from developmental and social psychology suggests that we are swift to attribute mental states to entities with eyes and movement patterns that look goal-directed, much slower to attribute mentality to eyeless entities with inertial movement patterns. But of course such superficial features needn't track underlying mentality very well in AI cases.

Call this the ASIMO Problem.

I draw two main lessons from the ASIMO Problem.

First is a methodological lesson: In thinking about the moral status of AI, we should be careful not to overweight emotional reactions and intuitive judgments that might be driven by such superficial features. Low-quality science fiction -- especially low-quality science fiction films and television -- does often rely on audience reaction to such superficial features. However, thoughtful science fiction sometimes challenges or even inverts these reactions.

The second lesson is a bit of AI design advice. As responsible creators of artificial entities, we should want people to neither over- nor under-attribute moral status to the entities with which they interact. Thus, we should generally try to avoid designing entities that don't deserve moral consideration but to which normal users are nonetheless inclined to give substantial moral consideration. This might be especially important in the design of children's toys: Manufacturers might understandably be tempted to create artificial pets or friends that children will love and attach to -- but we presumably don't want children to attach to a non-conscious toy instead of to parents or siblings. Nor do we presumably want to invite situations in which users might choose to save an endangered toy over an endangered human being!

On the other hand, if we do someday create genuinely human-grade AIs who merit substantial moral concern, it would probably be advisable to design them in a way that would evoke the proper range of moral emotional responses from normal users.

We should embrace an Emotional Alignment Design Policy: Design the superficial features of AIs so that they evoke the moral emotional reactions that are appropriate to the real moral status of the AI, whatever it is, neither more nor less.

(What is the real moral status of AIs? More soon! In the meantime, see here and here.)


Sunday, July 19, 2015

Philosophy Via Facebook? Why Not?

An adaptation of my June blog post What Philosophical Work Could Be, in today's LA Times.

--------------------------------------

Academic philosophers tend to have a narrow view of what counts as valuable philosophical work. Hiring, tenure, promotion and prestige depend mainly on one's ability to produce journal articles in a particular theoretical, abstract style, mostly in reaction to a small group of canonical and 20th-century figures, for a small readership of specialists. We should broaden our vision.

Consider the historical contingency of the journal article, a late-19th-century invention. Even as recently as the middle of the 20th century, leading philosophers in Western Europe and North America did important work in a much broader range of genres: the fictions and difficult-to-classify reflections of Sartre, Camus and Unamuno; Wittgenstein's cryptic fragments; the peace activism and popular writings of Bertrand Russell; John Dewey's work on educational reform.

Popular essays, fictions, aphorisms, dialogues, autobiographical reflections and personal letters have historically played a central role in philosophy. So also have public acts of direct confrontation with the structures of one's society: Socrates' trial and acceptance of the hemlock; Confucius' inspiring personal correctness.

It was really only with the generation hired to teach the baby boomers in the 1960s and '70s that academic philosophers' conception of philosophical work became narrowly focused on the technical journal article.

continued here.

Tuesday, July 14, 2015

The Moral Lives of Ethicists

[published today in Aeon Magazine]

None of the classic questions of philosophy are beyond a seven-year-old's understanding. If God exists, why do bad things happen? How do you know there's still a world on the other side of that closed door? Are we just made of material stuff that will turn into mud when we die? If you could get away with killing and robbing people just for fun, would you? The questions are natural. It's the answers that are hard.

Eight years ago, I'd just begun a series of empirical studies on the moral behavior of professional ethicists. My son Davy, then seven years old, was in his booster seat in the back of my car. "What do you think, Davy?" I asked. "People who think a lot about what's fair and about being nice – do they behave any better than other people? Are they more likely to be fair? Are they more likely to be nice?"

Davy didn’t respond right away. I caught his eye in the rearview mirror.

"The kids who always talk about being fair and sharing," I recall him saying, "mostly just want you to be fair to them and share with them."

When I meet an ethicist for the first time – by "ethicist", I mean a professor of philosophy who specializes in teaching and researching ethics – it's my habit to ask whether ethicists behave any differently to other types of professor. Most say no.

I'll probe further: Why not? Shouldn't regularly thinking about ethics have some sort of influence on one’s own behavior? Doesn't it seem that it would?

To my surprise, few professional ethicists seem to have given the question much thought. They'll toss out responses that strike me as flip or are easily rebutted, and then they'll have little to add when asked to clarify. They'll say that academic ethics is all about abstract problems and bizarre puzzle cases, with no bearing on day-to-day life – a claim easily shown to be false by a few examples: Aristotle on virtue, Kant on lying, Singer on charitable donation. They'll say: "What, do you expect epistemologists to have more knowledge? Do you expect doctors to be less likely to smoke?" I'll reply that the empirical evidence does suggest that doctors are less likely to smoke than non-doctors of similar social and economic background. Maybe epistemologists don’t have more knowledge, but I'd hope that specialists in feminism would exhibit less sexist behavior – and if they didn't, that would be an interesting finding. I'll suggest that relationships between professional specialization and personal life might play out differently for different cases.

It seems odd to me that our profession has so little to say about this matter. We criticize Martin Heidegger for his Nazism, and we wonder how deeply connected his Nazism was to his other philosophical views. But we don’t feel the need to turn the mirror on ourselves.

The same issues arise with clergy. In 2010, I was presenting some of my work at the Confucius Institute for Scotland. Afterward, I was approached by not one but two bishops. I asked them whether they thought that clergy, on average, behaved better, the same or worse than laypeople.

"About the same," said one.

"Worse!" said the other.

No clergyperson has ever expressed to me the view that clergy behave on average morally better than laypeople, despite all their immersion in religious teaching and ethical conversation. Maybe in part this is modesty on behalf of their profession. But in most of their voices, I also hear something that sounds like genuine disappointment, some remnant of the young adult who had headed off to seminary hoping it would be otherwise.

In a series of empirical studies – mostly in collaboration with the philosopher Joshua Rust of Stetson University – I have explored the moral behavior of ethics professors. As far as I'm aware, Josh and I are the only people ever to have done so in a systematic way.

Here are the measures we looked at: voting in public elections, calling one's mother, eating the meat of mammals, donating to charity, littering, disruptive chatting and door-slamming during philosophy presentations, responding to student emails, attending conferences without paying registration fees, organ donation, blood donation, theft of library books, overall moral evaluation by one's departmental peers based on personal impressions, honesty in responding to survey questions, and joining the Nazi party in 1930s Germany.

[continued in the full article here]

Wednesday, July 08, 2015

Profanity Inflation, Profanity Migration, and the Paradox of Prohibition

As a fan of profane language judiciously employed, I fear that the best profanities of English are cheapening from overuse -- or worse, that our impulses to offend through profane language are beginning to shift away from harmless terms toward more harmful ones.

I am inspired to these thoughts by Rebecca Roache's recent Philosophy Bites podcast on swearing.

Roache distinguishes between objectionable slurs (especially racial slurs) and presumably harmless swear words like "fuck". The latter words, she suggests, should not be forbidden, although she acknowledges that in some contexts it might be inappropriate to use them. Roache also suggests that it's silly to forbid "fuck" while allowing obvious replacements like "f**k" or "the f-word". Roache says, "We should swear more, and we shouldn't use asterisks, and that's fine." (31:20).

I disagree. Overstating somewhat, I disagree because of this:

"Fuck" is a treasure of the English language. Speakers of other languages will sometimes even reach across the linguistic divide to relish its profanity. "Fuck" is a treasure precisely because it is forbidden. Its being forbidden is the source of its profane power and emotional vivacity.

When I was growing up in California in the 1970s, "fuck" was considered the worst of the seven words you can't say on TV. You would never hear it in the media, or indeed -- in my posh little suburb -- from any adults, except maybe, very rarely, from some wild man from somewhere else. I don't think I heard my parents or any of their friends say the word even once, ever. It wasn't until fourth grade that I learned that the word existed. What a powerful word, then, for a child to relish in the quiet of his room, or to suddenly drop on a friend!

"Fuck" is in danger. Its power is subsiding from its increased usage in the public sphere. Much as the overprinting of money devalues it, profanity inflation risks turning "fuck" into another "damn". The hundred-dollar-bill of swear words doesn't buy as much shock as it used to. (Yes, I sound like an old curmudgeon -- but it's true!)

Okay, a qualification: I'm pretty sure what I've just said is true for the suburban California dialect; but I'm also pretty sure "fuck" was never so powerful in some other dialects. Some evidence of its increased usage overall, and its approach toward "damn", is this Google NGram of "fuck", "shit", and "damn" in "lots of books", 1960-2008:

[Google Ngram chart: relative frequencies of "fuck", "shit", and "damn" in English-language books, 1960-2008]
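
If you'd like to reproduce the chart yourself, here is a rough sketch in Python. It leans on the unofficial JSON endpoint behind the Ngram Viewer; the URL, the query parameters (including the corpus code), and the response format are all assumptions on my part, and Google could change or retire them at any time.

```python
import requests
import matplotlib.pyplot as plt

# Assumed unofficial endpoint behind the Ngram Viewer; not a supported API.
URL = "https://books.google.com/ngrams/json"
PARAMS = {
    "content": "fuck,shit,damn",
    "year_start": 1960,
    "year_end": 2008,
    "corpus": 26,      # assumed numeric code for an English corpus
    "smoothing": 3,
}

resp = requests.get(URL, params=PARAMS)
resp.raise_for_status()
years = list(range(PARAMS["year_start"], PARAMS["year_end"] + 1))
for series in resp.json():   # assumed: one dict per ngram, with a "timeseries" list
    plt.plot(years, series["timeseries"], label=series["ngram"])
plt.xlabel("Year")
plt.ylabel("Relative frequency in the corpus")
plt.legend()
plt.show()
```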

A further risk: As "fuck" loses its sting and emotional vivacity, people who wish to use more vividly offensive language will find themselves forced to other options. The most offensive alternative options currently available in English are racial slurs. But unlike "fuck", racial slurs are plausibly harmful in ordinary use. The cheapening of "fuck" thus risks forcing the migration of profanity to more harmful linguistic locations.

The paradox of prohibition, then: If the woman in the eCard above wishes to preserve the power of her favorite word, she should cheer for it to remain forbidden. She should celebrate, not bemoan, the existence of standards against the use of "fuck" on major networks, the awarding of demerits for its use in school, and its almost complete avoidance by responsible adults in public contexts. Conversely, some preachers might wish to encourage the regular recitation of "fuck" in the preschool curriculum. (Okay, that last remark was tongue in cheek. But still, wouldn't it work?)

Despite the substantial public interest in retaining the forbidden deliciousness of our best swear word, I do think that since the word is in fact (pretty close to) harmless, severe restrictions would be unjust. We should condemn it only with the forgiving standards we usually apply to etiquette violations, even if this means the term won't be quite as potent as it otherwise would be.

Finally, let me defend usages like "f**k" and "the f-word". Rather than being silly avoidances because we all know what we're talking about, such decipherable maskings communicate and reinforce the forbiddenness of "fuck". Thus, they help to sustain its power as an obscenity.


Thursday, July 02, 2015

Why In-Between Belief Is Worse Than In-Between Extraversion

For twenty years, I've been advocating a dispositional account of belief, according to which to believe that P is to match, to an appropriate degree and in appropriate respects, a "dispositional stereotype" characteristic of the belief that P. In other words: All there is to believing that P is being disposed, ceteris paribus (all else equal or normal or right), to act and react, internally and externally, like a stereotypical belief-that-P-er.

Since the beginning, two concerns have continually nagged at me.

One concern is the metaphysical relation between belief and outward behavior. It seems that beliefs cause behavior and are metaphysically independent of behavior. But it's not clear that my dispositional account allows this -- a topic for a future post.

The other concern, my focus today, is this: My account struggles to explain what has gone normatively wrong in many "in-between" cases of belief.

The Concern

To see the worry, consider personality traits, which I regard as metaphysically similar to beliefs. What is it to be extraverted? It is just to match, closely enough, the dispositional stereotype that we tend to associate with being extraverted -- that is, to be disposed to enjoy parties, to be talkative, to like meeting new people, etc. Analogously, on my view, to believe there is beer in the fridge is, ceteris paribus, to be disposed to go to the fridge if one wants a beer, to be disposed to feel surprise if one were to open the fridge and find no beer, to answer "yes" when asked if there is beer in the fridge, etc.

One interesting thing about personality traits is that people are rarely 100% extravert or 100% introvert, rarely 100% high-strung or 100% mellow. Rather, people tend to be between the extremes, extraverted in some respects but not in others, or in some types of contexts but not in others. One feature of my account of belief which I have emphasized from the beginning is that it easily allows for the analogous in-betweenness: We often match only imperfectly, and in some respects, the stereotype of the believer in racial equality, or of the believer in God, or of the believer that the 19th Street Bridge is closed for repairs. ("The Splintered Mind"!)
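
For vividness, here is a deliberately crude toy model of such partial stereotype-matching -- nothing I'd offer as an analysis, since it treats dispositions as binary, equally weighted, and context-free -- but it makes degrees of match concrete:

```python
from dataclasses import dataclass

# Crude toy model: a dispositional stereotype is a set of dispositions, and
# the degree of match is the fraction of them the agent actually has. (Real
# dispositions are graded, context-sensitive, and unequally weighted.)

@dataclass
class Agent:
    dispositions: frozenset

BEER_IN_FRIDGE = frozenset({
    "goes to the fridge when wanting a beer",
    "feels surprise on finding the fridge empty",
    "answers 'yes' when asked if there is beer",
})

def match_degree(agent: Agent, stereotype: frozenset) -> float:
    """Fraction of the stereotype's dispositions that the agent has."""
    return len(agent.dispositions & stereotype) / len(stereotype)

# An in-between believer: sincere assent without the rest of the profile.
splintered = Agent(frozenset({"answers 'yes' when asked if there is beer"}))
print(match_degree(splintered, BEER_IN_FRIDGE))  # 0.333...
```

On this toy picture, in-between believing is no more mysterious than in-between extraversion; the puzzle, pursued below, is why middling profiles seem normatively worse for belief than for personality.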

The worry, then, is this: There seems to be nothing at all normatively wrong -- no confusion, no failing -- with being an in-between extravert who has some extraverted dispositions and other introverted ones; in contrast, something does seem to have gone wrong, typically, in structurally similar cases of in-between believing. If some days I feel excited about parties and other days I loathe the thought, with no particular excuse or explanation for my different reactions, no problem: I'm just an in-between extravert. In contrast, if some days I am disposed to act and react as if Earth is the third planet from the Sun and other days I am disposed to act and react as if it is the fourth, with no excuse or explanation, then something has gone wrong. Being an in-between extravert is typically not irrational; being an in-between believer typically is irrational. Why the difference?

My Answer

First, it's important not to exaggerate the difference. Too arbitrary an arrangement of, or fluctuation in, one's personality dispositions does seem at least a bit normatively problematic. If I'm disposed to relish the thought of a party when the wall to my left is beige and to detest the thought of a party when the wall to my left is truer white, without any explanatory story beneath, there's something weird about that -- especially if one accepts, as I do, following McGeer and Zawidzki, that shaping oneself to be comprehensible to others is a central feature of mental self-regulation. And on the other hand, some ways of being an in-between believer are entirely rational: for example, having an intermediate degree of confidence or having procedural "how to" knowledge without verbalizable semantic knowledge. But this so far is not a full answer. Wild, inexplicable patterns still seem more forgivable for traits like extraversion than attitudes like belief.

A second, fuller reply might be this: There is a pragmatic or instrumental reason to avoid wild splintering of one's belief dispositions that does not apply to the case of personality traits. It's good (at least instrumentally good, maybe also intrinsically good?) to be a believer of things, roughly, because it's good to keep track of what's going on in one's environment and to act and react in ways that are consonant with that. Per impossibile, suppose one faced a choice: to be a creature with the capacity to form dispositional structures that respond to evidence, stay mostly stable except under the influence of new evidence, and guide one's behavior accordingly; or to be a creature without the capacity to form such evidentially stable dispositional structures. It would be pragmatically wise to choose the former: on average, plausibly, one would live longer and attain more of one's goals. So perhaps the extra normative failing in wildly splintering belief dispositions derives from that. An important part of the value of having stable belief-like dispositional sets is to guide behavior in response to evidence. In normatively defective in-between cases, that value isn't realized. And if one explicitly embraces wild in-betweenness in belief, one goes a step further, thumbing one's nose at such structures when one could instead try to employ them toward one's ends.

Whether these two answers are jointly sufficient to address the concern, I haven't decided.

[Thanks to Sarah Paul and Matthew Lee for discussion.]
