Saturday, October 31, 2015
Why Arthur C. Brooks Is Afraid of Research
Today's New York Times carries an Op-Ed by Arthur C. Brooks titled "Academia's Rejection of Diversity." Brooks is the President of the American Enterprise Institute and, as the NYT puts it, a contributing opinion writer. Must be nice to be a contributing opinion writer.
If, like me, you have concerns about academia's faltering efforts in the area of diversity, you would probably make a point of reading this op-ed. If, like me, you've read Brooks in the past, you might expect that it would be yet another attack on the supposed liberal bias of the university. The argument is, once again, that academia has no interest in diversity of ideas. Somehow this argument is always constructed around partisan identifications rather than around arguments that truly address diversity of ideas.
But here's the real issue: like so many others who have advanced this argument over the decades, Brooks plays fast and loose with his examples. Take this paragraph, for instance:
" In one classic experiment from 1975, a group of scholars was asked to evaluate one of two research papers that used the same statistical methodology to reach opposite conclusions. One version “found” that liberal political activists were mentally healthier than the general population; the other paper, otherwise identical, was set up to “prove” the opposite conclusion. The liberal reviewers rated the first version significantly more publishable than its less flattering twin."
In the preceding discussion, Brooks had named a particular researcher in the Netherlands who had faked data. Brooks offers no evidence that this fraud had anything at all to do with the subject he is discussing. He concedes that ideologically motivated fraud (like voter ID fraud at the polling place?) is rare, so why is this example offered?
To set up his argument regarding "unconscious bias that creeps in when everyone thinks the same way."
That would be a problem, but the paragraph I quote here doesn't really make the case. Whereas he identified the specific researcher in the previous example, here he tells us nothing beyond the claim that he is discussing a "classic experiment from 1975." This experiment is such a classic that several attempts at googling turn up no sign of it. I am not suggesting that the experiment didn't happen; I am suggesting that without knowing more than what Brooks is willing to reveal, we have no way of judging the validity of the experiment or of the conclusions Brooks would have us draw.
But look at what he says in describing the experiment. He tells us nothing about how "conservative reviewers" in this experiment performed. He tells us nothing about the researchers, about their criteria for selecting subjects, or about the way "liberal" and "conservative" were defined, either in the grouping of reviewers or in the "paper" itself. And there is no mention of any sort of control group. In the absence of such information, it is impossible to make any judgment whatsoever about his example.
For that matter, there is something deeply wrong in what he tells us of this classic experiment. If the same methodology was used to reach opposite conclusions, then one version of the "paper" should in truth be more convincing than the other. (They could be equally unconvincing, but how are we to know in the absence of any access to the experiment?)
But in the end, Brooks drives his audience toward exactly the phenomenon he is arguing against. Without even the most minimal information required to make a valid judgment regarding this experiment, readers will most likely judge it based upon their own ideological leanings.
Pot, meet kettle.
1 comment:
Yup. My take isn't much different: http://academeblog.org/2015/10/31/partisan-politics-and-academic-freedom/