I want to say more about that Psychology Today article on fearful conservatives vs. rational liberals, the one the Sanity Squad discussed in its latest podcast.
But oh, where to begin, where to begin?
I'll start by disclosing my personal association with the article. Back in July, I got an email from an intern at the magazine, inviting me to be interviewed for a piece on political conversions. According to the email, the article was to be entirely even-handed and nonpartisan, and would incorporate stories from both sides of the political spectrum about people whose viewpoints had changed. It sounded like fun, and definitely right up my alley.
But if you read the finished product, it turns out that the "change" stories have boiled down to just one, that of journalist and blogger Cinnamon Stillwell, plus four short and superficial blurbs containing a couple of sentences apiece about famous "changers" (yes, this part was an attempt at even-handedness, at least by the numbers: there were two righty-to-lefties and two lefty-to-righties: Brock, Huffington, Reagan, and Hitchens).
During my rather lengthy telephone interview with author Jay Dixit, he asked me many times whether my post-9/11 political change had been motivated by fear. I repeatedly explained that it had not, referring to my blog articles on change, and describing the process involved in some detail.
Certainly, I said, there had been brief moments of fear, but they were not predominant, and didn't last very long. Instead, it seemed to me that 9/11 had acted initially as a sort of shock to the system, a signal to me that there was a lot that I didn't understand about the world, and that learning more would be of vital importance and would help me know what actions to support as a response to the attack.
I said that reading had been a huge part of the process for me--and in due course I'd encountered books and articles from the conservative side, a point of view I hadn't studied in any depth up to that point (I was already familiar, of course, with the liberal point of view). I emphasized that for me the process of change was not sudden at all; it took several years, and was far more cognitive than emotional.
Looking back, it's clear to me that the questions Jay Dixit asked were designed to get me to focus on fear as a motivator. That's fine, since it turns out to be the main topic of the article. But it hardly seems unbiased or balanced to leave out a story (mine) that challenges the article's conclusions.
I can't know for certain what motivated the author to leave me out of the article entirely. Nor do I know whether there were others who were similarly left on the cutting room floor. But I can't help but wonder whether my interview was eliminated from the final product because I repeatedly gave answers that didn't fit in with the message the author wanted to deliver: that those who became more conservative were motivated by fear rather than rational thinking.
And are fear and rationality mutually exclusive, anyway? As the Squad said on the podcast, fear is often adaptive and functional. After all, it evolved to warn us of dangers so that we can respond appropriately. The real question is this: even if most post-9/11 "changers" were motivated by fear (and the article presents no data on that particular question; I don't think anyone's done the research), was the danger realistic? If so, fear would be a rational initial response, and could lead to taking appropriate action to eliminate the danger. Denying the existence of a real danger is not only irrational, it can lead to the destruction of the denier.
Nowhere in the article are any of these issues dealt with, even on a superficial basis. And yet they are absolutely vital.
But the bulk of the article had nothing to do with this. The article as published was predominantly a summary of research studies purporting to identify the differences between conservatives and liberals; to associate fearfulness and other (mostly negative) character traits with the former, and openness and flexibility (and, ultimately, rationality) with the latter; and to show that fear motivates people to become slightly more conservative in their responses.
I might have written that the finished piece represented an analysis rather than a summary of such research, but that didn't seem to be the proper word. In fact, the Psychology Today article made no real attempt to evaluate or critique the research, nor to mention any research that might counter or negate it.
As I went through the article, flaws in the reasoning behind every piece of research cited came to mind. But to really understand the quality of a piece of research and to effectively critique its flaws, it's necessary to go to the source, the original paper itself. To do this for all the research cited in the article would be enough work for a small Ph.D. thesis. So, even though I'm known for my long posts, I'm not going to be doing that today (sighs of relief all around).
Fortunately, the internet has come to my rescue, as it has so many times before. Someone known as IronShrink has done some of the work for me, and for us.
IronShrink critiques one of the main pieces of research relied on in the article: Jost, Glaser, Kruglanski, and Sulloway's review of some 88 previous studies on conservatism. Finding fault with their study seems to have been a bit like shooting fish in a barrel for those well-versed in research methodology (here's another take-down of the same Jost et al. article, this one written by Colorado State professor C. Richard Jansen--who, by the way, is a research chemist and nutritionist rather than a social scientist).
Read both pieces, if you're interested in the details. Even in the slippery world of social science research, the Jost review's methodology seems particularly elusive (or perhaps the proper word would be "illusive"). Among other things, as both articles point out, the Jost researchers fail abysmally in their most elementary task, the basic definition of the terms they are studying--conservatism and liberalism.
Just to get a bit of the flavor of what we're talking about here, the Jost review apparently says that Stalin, although on the left, could be considered a figure on the right because he wanted to defend and preserve the Soviet system. "Conservative," get it?
Other methodological flaws are enumerated in some detail in both articles. Here's Jansen on the subject:
Jost and his colleagues carried out a meta-analysis of 88 studies involving 22,818 individual subjects in which approximately 27 discrete psychological variables were examined, according to the authors, in terms of the political orientation of the subjects...The methodology and software employed were not described; indeed, in this paper there is not even a section entitled methodology or methods. Meta-analysis, to be even valid much less successful, should be based on a systematic review of the available literature, definition of terms, and a complete unbiased collection of original high quality studies that examine the same, not 27, variables in terms of 12 other variables.

This clearly was not done...[A] hodgepodge of variables were examined in studies involving mostly undergraduate students. The subjects other than undergraduates were not adequately described, either qualitatively or quantitatively. Gender, age, race or ethnicity were not addressed. The authors describe no efforts to attest to the quality of the studies examined, or the biases potentially involved in the studies themselves or by the investigators, not to mention their own biases. Many of the studies quoted apparently were not peer reviewed, since they were in monographs, book chapters, and conference papers. The impression of statistical rigor is more apparent than real...

I'm no expert on research; I haven't got a Ph.D. in the field. But I had to take courses at the graduate level in statistics and in designing and critiquing research, and I worked for a while as a research associate on a large project under some fairly well-known social science researchers. So I know enough to know that you shouldn't leave out important data--and if you do, it usually means you're covering up some more basic flaw in the data itself.
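Just to make concrete what a methods section should be pinning down: at its core, a fixed-effect meta-analysis pools effect sizes by weighting each study by the inverse of its variance. Here's a minimal sketch of that arithmetic in Python, with numbers I've invented purely for illustration; nothing in it is drawn from the Jost paper (which, again, describes no such procedure).

```python
import math

# One (effect size, variance) pair per hypothetical study. These numbers
# are invented for illustration; nothing here comes from the Jost review.
studies = [(0.30, 0.02), (0.10, 0.05), (0.45, 0.01), (-0.05, 0.04)]

# Fixed-effect (inverse-variance) pooling: each study is weighted by the
# reciprocal of its variance, so more precise studies count for more.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate

lo, hi = pooled - 1.96 * se, pooled + 1.96 * se  # 95% confidence interval
print(f"pooled effect = {pooled:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The arithmetic itself is trivial. The scientific weight rests entirely on the choices behind it--which studies get included, how effect sizes are extracted, whether the variables being pooled are actually comparable. Those are exactly the choices Jansen says the Jost paper never discloses.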
IronShrink goes into even greater detail than Jansen in his piece about the Jost review article. I didn't read the original Jost research (it doesn't appear to be available online), but IronShrink has, and he's not impressed.
I did, however, read another piece of research discussed at length in the Psychology Today article, the Block and Block study. You can find it online here.
I've mentioned that I'm familiar with reading psychology research. I'm also well aware that it's almost spectacularly difficult to design such research well, and easy to find fault with most studies that are done. But even given that caveat, the Block research is almost shockingly poorly designed, especially in terms of its sample.
This is the basic design: taking nursery school students in Berkeley and Oakland, California; testing them at the age of three (1969-1971) for certain personality traits; and then, years later (around 1989), when the subjects were twenty-three, comparing the personalities of those judged to be liberal against those judged to be conservative.
So, what's wrong with this picture? Quite a bit, I'm afraid. The most serious problem is the nonrepresentativeness of the sample population. Then, as now, conservatives in Berkeley and Oakland were scarcer than hen's teeth. And these were twenty-three-year-old conservatives in Berkeley and Oakland, growing up in the late 60s and 70s--an especially unusual bunch, I'd imagine. There is absolutely no reason to believe that any conservatives found by this study would be typical or representative of conservatives as a whole; on the contrary. So the generalizability of the study would be highly suspect, to say the least, even if it were otherwise impeccably designed.
But it's actually much worse than that. When I looked at the figures, I encountered what I'll call the mysteriously missing data problem. There were 95 subjects, and when I looked to find one of the most elementary facts about them--how many had been defined as conservatives and how many as liberals--I discovered that Block and Block had failed to report the answer.
How odd. Because the authors had written in the body of their article that, "The LIB/CON [Liberal/Conservative] score distribution in this sample leans toward liberalism, with relatively few participants tilting toward conservatism."
Get that, folks? In this supposedly seminal study on the personality traits of conservatives, not only can we conclude that any youthful conservatives found in Berkeley and Oakland might be atypical of the conservative population as a whole, but it appears possible that the authors found hardly any conservatives at all. At any rate, they're not telling.
Note the authors' careful wording: there were "relatively few participants tilting [my emphasis] toward conservatism." If you read the rest of the paper, it continually speaks of "relatively liberal" and "relatively conservative" [my emphasis again] subjects. Every now and then the authors slip into using the terms "liberal" and "conservative" without the modifier, but for the most part they stick with "relatively." That fact, coupled with the glaring absence of the relevant data, leads me to conclude that it is entirely possible the study featured no conservatives at all.

There's no way to know, of course. But the authors' careful hedging with terminology such as "relatively," their mention of the paucity of conservatives in the study, and, above all, the missing figures make me very suspicious indeed. And if there were few or no conservatives in their sample, then what were the Blocks actually studying and comparing? The liberal and the less liberal, perhaps? The Left and the liberal? A worthy task, no doubt, but one that cannot possibly shed much light on conservatives. Because a relatively less Leftist liberal does not a conservative make--even in Berkeley.
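To see in concrete terms why those missing counts matter so much, here's a toy simulation in Python. It asks: if liberals and conservatives in a 95-subject sample actually had identical trait scores, how big a gap between the two groups would show up by chance alone, depending on how many "conservatives" there really were? All the numbers are mine, invented for illustration; the Blocks, again, report none.

```python
import random
import statistics

def chance_gap_sd(n_conservative, n_total=95, trials=2000):
    """Spread (standard deviation) of the observed conservative-minus-
    liberal gap when the two groups are in fact identical -- every
    trait score drawn from the same normal distribution."""
    gaps = []
    for _ in range(trials):
        scores = [random.gauss(0, 1) for _ in range(n_total)]
        conservatives = scores[:n_conservative]
        liberals = scores[n_conservative:]
        gaps.append(statistics.mean(conservatives) - statistics.mean(liberals))
    return statistics.stdev(gaps)

# How noisy is the comparison at different (hypothetical) group sizes?
for n in (5, 15, 45):
    print(f"{n:2d} conservatives out of 95: "
          f"chance gap sd = {chance_gap_sd(n):.2f}")
```

With only five conservatives in the sample, chance gaps of nearly half a standard deviation are routine; with forty-five, the noise shrinks by more than half. Without the group counts, a reader has no way to gauge how much of any reported difference chance alone could produce.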
But the point is not to attack Block. The point is that Psychology Today, which should know better, breathes not a word of any of these problems or criticisms.
Social science research about politics needs to be especially rigorous because of its potential to reflect the bias of the researchers, whatever side they may be on. Such research is especially amenable to being used (and misused) to score political points, as propaganda. And that's something Psychology Today ought to have been well aware of, and to have guarded assiduously against. Unfortunately, the editors appear to have failed abysmally at that task.
[NOTE: Here's a great email another blogger, Asher Abrams, sent to Dixit. And here's Cinnamon Stillwell's own take on the article. Others speaking out are Fausta, Shrink, and Dr. Sanity. And here's a discussion at Eugene Volokh's by a researcher named Jim Lindgren, who agrees with me on the problem of sample representativeness in the Block study.]