Conference Investigates Metrics and Misconduct

By Tory Brykalski - On February 4 and 5, 2016, the Innovating Communication in Scholarship project hosted a conference entitled “Gaming Metrics: Innovation and Surveillance in Academic Misconduct.” Presenters from across the U.S. and Europe—and from fields as diverse as anthropology, informatics and computing, biology, and economics—explored whether new metrics-based evaluation processes may be creating incentives for new forms of academic misconduct.

In his opening remarks, Mario Biagioli, Distinguished Professor of Law and Science and Technology Studies, and director of the Center for Science and Innovation Studies, explained that traditional policies on misconduct have been rooted in efforts to discern and isolate truth from falsehood. To what extent, Biagioli asked, do new metrics-based forms of evaluation necessitate a re-definition of what “misconduct” means?

Goal displacement

Alex Csiszar, an assistant professor of the history of science at Harvard University, launched the first panel by describing a letter written to Eugene Garfield (one of the founders of citation analysis and bibliometrics) by Robert Merton (organizer of a 1974 conference entitled “Towards a Metric of Science”). In that letter, Merton wrote:

“Watch out for goal displacement. Whenever an indicator comes to be used in a reward system […] there develops tendencies to manipulate the indicator so that it no longer indicates what it once did. Now that more and more scientists are becoming increasingly aware of citations as a form of behavior, some will begin in semi-organized fashion to work out citation arrangements in their own specialties.”

Merton had developed this notion of “goal displacement” in his work on bureaucratic structure in the 1940s, and went on to apply it to his research on the sociology of science in the 1950s and beyond. Scientific publishing practices—“publication for its own sake”—would, he believed, allow certain publications to make it through review, irrespective of content.

In light of Merton’s work on goal displacement, Csiszar argued that the need to develop scientific metrics began in earnest only when science became a bureaucratic exercise. This transformation, he said, became visible immediately after WWII, when the state began to increase its funding for the sciences, compelling scientists to make their work legible to the state.

In order to bureaucratize their research, Csiszar said, those scientists had to legitimate a particular social structure of metrics. Such metrics would eventually “displace” the goals of academic research and become “ends” in themselves. In some of Merton’s publications, for example, he suggested that the development of citation analysis “may lead to citation practices that will in due course invalidate them as indicators of the quality of research.”

Controlling scientific research

The second panelist was Paul Wouters, professor of scientometrics at Leiden University. Wouters revealed how the criteria of scientific quality have been changed by citation analysis. Research, he observed, is a business in which permanent communication is crucial. Scholars must be visible if they want to survive—a situation that inevitably leads to particular kinds of thinking and research.

Increased integration between professional or technical communities and industry or the state means that these communities are no longer autonomous. What’s more, though it is a given that “quality, excellence, and impact” are crucial for research success, the question remains: quality, excellence, and impact to—or for—whom? And while qualitative and quantitative assessments yield different (and either more or less neutral, biased, or systematic) results, in practice, peer review and indicator-based assessment are intertwined. “There are no innocent citation practices,” Wouters said, “and no innocent peer review.”

These observations led Wouters to explore the foundations of the “norms of science.” Those foundations, he said, no longer reside in the traditional scientific community. As a case in point, Wouters offered the Leiden Manifesto. In this paper, Wouters and his co-authors offer ten best practice principles in metrics-based assessment, hoping that “researchers can hold evaluators to account, and evaluators can hold their indicators to account.” But even these principles, Wouters explained, rely on scientific norms—on categories like “appropriate” and “responsible.”

Concluding, he asked: Who is in control? Who dominates the research agenda and steers the money that flows into research? Only by answering these questions, Wouters said, can we determine the extent to which the “quality measurement systems that construct quality” also interfere with the knowledge we want to produce.

Misconduct as resistance

Karen Levy, a postdoctoral fellow in New York University’s Department of Media, Culture, and Communication, concluded the panel. Levy suggested that examining low-wage laborers and their resistance to surveillance may help us rethink misconduct and gaming in an academic context.

Truckers, for example—like academics—have a tradition of workplace autonomy. But their work is increasingly surveilled and measured: the federal government now requires truckers to log and report the hours they spend on the road, and trucking companies have installed fleet management systems that measure and score drivers across a range of metrics, including speed and fuel efficiency.

Resisting this kind of surveillance, drivers may damage or hack monitoring equipment, or simply exploit the system’s limitations. While this “misconduct” arguably undermines safety and efficiency, Levy suggested that these modes of “resistance” reveal that truckers are already engaged in a form of relational negotiation with their employers and their metrics.

Levy drew a parallel between these changes within the trucking industry and the emergence of metrics in academia. What, she asked, might academic metrics teach us about the surveillance of low-wage labor? How might resisting quantification in our publishing practices mobilize us to do something about working class surveillance more broadly? What forms of working class solidarity emerge when we replace the term misconduct with resistance?

Learn more about the Innovating Communication in Scholarship Project (ICIS), or view a full program of events from the Gaming Metrics conference.