
Numeric indicators, research, task corruption

Brumback (2012) on the impact of the impact factor on research: the implications of simple numeric indicators and the behaviours they drive.

Needs unpacking for its implications for teaching quality, learning analytics, etc.

“Publish or perish” is the time-honored “principle” for academicians who race to accumulate lines under the “publications” section of a curriculum vitae. The original intent of publication—to inform others of findings and further scientific knowledge—has been corrupted by factors including (1) exponential growth of journals and the journal industry, fueled in part by intrusion of the Internet into all aspects of academic life; and (2) adoption of journal metrics (rather than written content) as the measure of scientific quality. The proprietary Thomson Reuters Impact Factor is the most pernicious metric, having caused editors and publishers to change editorial practices to boost the number. At the same time, gullible administrators and government agencies have been persuaded that metrics for the journal in which materials are published can be used as a measure of the worth of individual investigators (and institutions) and their research efforts: simple numbers can be substituted for the burdensome effort required to read and assess research quality. Thus, granting of research funds, awarding of academic rank and tenure, and determination of salaries (including bonus payments) have become tied to manipulable journal metrics rather than the significance or quality of reported research.”