Shermer’s tweet and some thoughts on positivism, empiricism, and methodologies

This is a bit of an unwieldy post, but I’m trying to tie a few loose threads together. If you’re reading this, bear with me.

Lately, I’ve been thinking a lot about Michael Shermer’s tweet about the March for Science, the ensuing backlash, and how this relates to some of the quantitative methods in DH and Comp Rhet reading that I’ve been doing. Specifically, I’m thinking about how some scientists position science as somehow outside of ideology, and / or what this can mean for how humanists appropriate scientific (or social scientific) methods and methodologies.

In case anyone reading this hasn’t been following the story, the broad sketch is this. Shermer, a scientist and noted skeptic, tweeted that science exists outside of politics and ideology, and that the March for Science therefore doesn’t really need a diversity statement. People challenged him about this, and he wrote a blog post explaining why he doesn’t believe that science and ideology / politics can mix, noting that there’s never been a better time for a woman and / or a member of another marginalized group to join STEM because it’s a bastion of liberalism and universities are full of people who believe in inclusion. Then, a lot of people clapped back. Here’s a link to Shermer’s initial tweet, here’s a link to his response to the backlash, and here’s a great response to the response.

Anyway…

Shermer is peddling the same “we need to stick together” bullshit that other folks on the left are peddling, where “sticking together” is defined by not talking or thinking about the ways that the left has failed and continues to fail marginalized folks. The idea that we must only focus on the social progress that has been made and not on what still isn’t working actually delays further progress. It’s also kind of insulting to our collective intelligence. We can admire certain aspects of a certain brand of scientific thinking and also point to the things that it can’t do, or hasn’t done well in the past, or still isn’t doing. We can and should do that now, even when Donald Trump is president and people are rejecting the findings of scientists. In fact, we must do that, because what scientists (and supporters of the unimpeachability of science) have done so far isn’t actually working very well.

I’m particularly annoyed by this idea that science and politics don’t and should never mix because (as Shermer puts it) scientists are primarily liberal proponents of “inclusion,” and that this somehow absolves them from having to think about how to remake institutions so that they’re actually transformed by historically excluded voices instead of merely vaguely tolerant of them (as a best case scenario).

Here’s a paragraph from his response that particularly irks me, in which he discusses a public conversation that he had with another scientist after the storm surrounding his initial tweet:

I asked him [Lawrence Krauss] why people seem to think that science still excludes women and minorities (and others) when, in fact, it is peopled by professors who are almost entirely liberals who fully embrace the principles of inclusion (and the laws regarding affirmative action). Are we to believe that all these liberal academics, when behind closed doors, privately believe that women and minorities can’t cut it in science and so they continue to mostly hire only white men?

Krauss was unequivocal in his response. Absolutely not. There has never been a better time to be a woman in science, he explained, elaborating that at his university, Arizona State University, not only does the student body perfectly reflect the demographics of the state of Arizona, the President of ASU has mandated that if two candidates are equally qualified for a professorship, one a man and the other a woman, the woman should be selected for the job. Full stop.

While it’s great that ASU is thinking about the historical representation of women (though, seemingly, specifically women and not others? I’m not sure if that’s accurate, or if that’s just how Krauss was framing it…) in STEM, let’s examine some of the things that this hiring mandate doesn’t account for.

It doesn’t account for how women will be supported in the actual job.

It doesn’t account for the fact that the existing structures and rules have been built and maintained by (mostly white, mostly middle-class) men who see themselves and their version of logic and rationality reflected in absolutely every place that they look.

It doesn’t account for the fact that if new(er) folks want to keep their job, they need to (continue to) show at least a basic amount of deference to those existing structures and rules, or that graduate school is an excellent training ground for this rather than a place where we can learn how to advocate for transforming these structures.

It doesn’t account for the way that some arguments, or some ways of being in the world, are coded as “rational” and some as “emotional,” and that in order to be considered part of the former (and privileged) category, it might be considered strategically necessary to suppress discussing what feels rigorous, thoughtful, just, or obvious.

It doesn’t account for the kinds of aggressions that women (and others) disproportionately face, and that other people may not even be able to see or might not recognize as aggressions.

So, really, a policy like this is positioned to give the appearance of “inclusion” without the infrastructure to actually support it. This allows “professors who are almost entirely liberals who fully embrace the principles of inclusion” to feel like they’re doing the best job that they can to be inclusive, while also absolving them of the responsibility to make actual change that might threaten their own dominance.

It took me a while to get here, but this brings me to a connection that I see developing between Shermer’s argument and the reading that I’ve been doing about digital humanities and quantitative Comp Rhet methodologies.

I’ve been particularly interested in DH methodology since, as a fairly new-ish field figuring out what it is and positioning itself as somewhat of a humanities / social science / computer science hybrid, these conversations feel very resonant to me. I’m interested in the methodological development of Comp Rhet, and like DH, Comp Rhet has fretted and argued and disagreed about what it is, what kind of work it does (or doesn’t do), and how to justify that work to other people.

One of the reasons I’m interested in DH, aside from that parallel, is that so many of the methods conversations seem to focus on keeping the humanities in the digital humanities. And so few of our conversations in Comp Rhet that center around digital methodologies (especially quantitative ones) seem to share this as a goal.

In Debates in the Digital Humanities, writers like Tanya Clement advocate for a more conscious alignment between the digital humanities and the (non-digital?) humanities rather than (primarily) the social sciences. She doesn’t intend to dissuade DH scholars from using social science methods, but she wants those methods grounded in reflexivity about the limitations of knowledge (both what can be known, and how any specific researcher positioned in multiple, simultaneous discourse communities can know it). Jeff Binder’s work in the same collection also touches on this. He writes about how topic modeling methodologies often fail to examine their own epistemological assumptions about the nature of language, and he warns against digital humanists simply applying a computer science methodology to the study of literary texts without considering this positioning.

Lauren Klein, a historian, is similarly concerned with the way that DH might use findings as a substitute for facts: what is empirical (i.e. observable) is not necessarily true. Findings are filtered, always, through our many lenses. The interpretation of data – even our research questions and what we choose to look for (or what we don’t) – is always already ideological and political.

I appreciate the way that these digital humanists seek to remind fellow practitioners of the value of the humanities within the digital humanities just as much as I appreciate the potential of digital methods (like topic modeling and data and text mining) in rounding out some of our epistemological limits. Breaking down the silos of the university is a necessary project.

More generally, I’m really excited about projects that allow us to consider how university values have been created and maintained by people who don’t think it’s valuable to focus on the way that people other than them experience the world. This is what’s missing from Shermer’s tweet and blog post.

I’m a little troubled, then, by what I’m reading in the push toward quantitative methodologies in Comp Rhet (which sometimes, but not always, employ methods like the ones DH uses). I’m not arguing that all DH methods are quantitative: many are not. But some are, and for the humanities, this is still not quite as common and, thus, needs to be explained and problematized in the way that the DH scholars I’ve mentioned have done in their work.

Comp Rhet practitioners who are advocates for quantitative methodologies (especially Writing Program Administrators, Writing Center directors, and others who more consciously straddle administrative and faculty identities) often argue that a quantitative literacy is necessary to stem the tide of neoliberal, bureaucratic overreach.

If we can become students of bureaucracy, as Richard E. Miller advocates, then we can more effectively intervene within bureaucratic structures. Susan Lang and Craig Baehr position much of the Comp Rhet research tradition as “lore” – anti-empirical work, or work based on something other than a RAD (replicable, aggregable, and data-supported) framework. Richard Haswell, in several places, argues for the reintroduction of quantitative methods, and work by Derek Mueller and Benjamin Miller shows how this can be done using the methods / methodologies of fields like the digital humanities (like topic modeling, or text / data mining).

I’m really on board with the project of increasing our field’s quantitative literacy. I don’t actually think that the qualitative / quantitative divide makes any sense: we should try to know things in as many ways as possible. We should strive toward what Brian McNely and Christa Teston call “methodological promiscuity,” or mixed methods – even as we robustly qualify, as Bob Broad notes, how our methodological choices are rooted not only in selecting “the best possible method” for the object of study but also in our own feelings and beliefs, which are not actually separable from our projects.

That said, there’s an anxiety running through some of the quantitative work: a sense that we should distance our field from research methods that are not grounded in empiricism – especially when the reason is that we want our work to be legible (read: acceptable) to people in other fields.

It reminds me of when teachers say that they’re only ripping apart the grammar of a student’s essay to “prepare them” for the next teacher. It feels like a way for researchers to avoid owning their own skepticism about non-empirical projects and to push the responsibility elsewhere. The rationale for adopting quantitative methods cannot and should not be solely that other people won’t believe or care about the work that we’re doing unless we do. This rationale positions us to reject anything else that doesn’t appease those audiences (i.e. discussions about identity and its impact on structures) because it isn’t empirical to them.

Instead, let’s do research because we want to learn stuff. Let’s keep thinking about what is positioned as not worthy of knowing about. Let’s think about how what we know, or what we think we’re seeing, limits what we can know. And let’s marshal all of it.

Let’s approach our work in a million different ways that challenge each other’s findings, because research is big enough for everyone, and because people’s personal experiences matter, and because those experiences are rooted in structures, and because studying human behavior is hard.



 Supported by the CUNY Doctoral Students Council.  
