“Fake News” is a symptom. Not a disease.

In this #alternativefacts landscape, it feels almost heretical to take any side other than STRONGLY OPPOSE when it comes to the discussion surrounding fake news. As in, “fake news” bad, “real news” good.

To be clear, fake news and alternative facts pose real problems. We should address them in our classrooms. And yet, the entire debate around the issue feels like a symptom of a larger problem.

As a rhetoric teacher, part of my job revolves around getting students to consider the validity of the information they plan to incorporate into their work and the effect of that information on their intended audience. Students need to consider the provenance of a source, the agenda its maker had when crafting it (assuming, as I do, that none of us can be truly agenda-less, or even fully aware of what our agendas are), its timeliness, and the audience it is likely trying to reach.

There are millions of resources that can help with this work: videos, library guides, textbook chapters, articles, classroom activities, professional development workshops, clever heuristics that use funny acronyms. I feel fully supported in helping students to choose between what my own discourse community would consider “valid” or “fake” sources. That has also been an important practice in my own life: being able to sort through and evaluate truth claims in an information-dense world makes me feel a little less anxious.

And yet, I think there’s much less of an emphasis on helping students to navigate the embedded logics of institutions like colleges and universities. For me, the salient questions are “How does knowledge get produced and certified?” and “Who certifies it?” and “What gets to ‘count’ as knowledge? What doesn’t? Why not?” rather than “What is ‘fake’ or ‘real’?”

Or, even trickier questions: who has historically benefited, and who has historically not, from the institutional logics that govern knowledge production? What kinds of information get excised from the record for appearing unserious or inadequately rigorous? Tricia M. Kress’s chapter, “Can’t You Just Know? Critical Research as Praxis,” points to this dilemma. In it, she compares the articulated lived experience of a young student of color to the critical work of a university-certified “expert.” The two reach identical conclusions, with one notable exception: one has clout, and the other doesn’t.

I’ll admit that my own pedagogical practice currently skews toward teaching students how to find and incorporate “valid” research. But in the past few semesters, I’ve had interesting and generative conversations in my classroom that have made me want to reconsider this.

The most enriching conversations have generally involved student pushback. Why can’t the Bible be used as a “credible” source to determine when life begins? asked a Catholic student. How can we be sure, in conducting an ethnography, that we’re ethically representing a community with which we have only limited experience? asked a white first-year investigating a subreddit for a WAC class that I taught last spring. Why doesn’t our lived experience count as research, when someone else’s lived experience of us can? asked a student of color.

These are important questions. But the tools I’ve mentioned above don’t really help students grapple with them. Telling the first student that her source’s authority cannot be verified rings hollow to her experience, but inviting her into a conversation about how authority is determined seems like a much more generative avenue. In the cases of the second and third students, I’ve grappled with some of these very questions myself, but mostly in graduate school, once I had shown sufficient deference to institutional logics. Not as a first-year.

My question is this: why should students have to wait until then? Why shouldn’t we interrogate our own knowledge production and certification with students, and be honest about the fact that our institutions are flawed, complex social constructs made up of human beings with histories of discrimination and exclusion based on race, gender, sexuality, religion, colonialism, and empire-building (not an exhaustive list)? We need to communicate that our institutions have been slow to change (or even to recognize problems), and that we remain adamant that they do so, even as we are surrounded by evidence that casts doubt on whether they can or will. If we did this, rather than presenting institutions as perfectly reliable and seamlessly objective Fact Factories, wouldn’t we stand a better chance of gaining the public’s trust?

I want students to develop a greater appreciation for the processes we use, as well as a healthy, critical skepticism of some of those processes. Maybe then they wouldn’t feel that we’re demanding their fidelity so much as inviting them to help us shape even better processes. Maybe this would also encourage students to look at the way other institutions produce and certify knowledge.

Is this an unrealistically optimistic hope to have in our post-fact reality?
