By Dr Martins Bruveris, Lecturer in Mathematics
Sporting achievement is rated by finishing position, best times, goals and, ultimately, wins. Scientific achievement, success in research, is ranked using metrics that measure esteem: How many citations do I have? What is my h-index? How prestigious are the journals in which I publish? How many grants have I obtained? How quickly have I been promoted?
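As an aside, for readers unfamiliar with the h-index: it is the largest number h such that the author has at least h papers with at least h citations each. A minimal sketch of the computation in Python, using invented citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    # Sort citation counts in descending order, then find the last
    # rank at which the count still meets or exceeds that rank.
    sorted_counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(sorted_counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for six papers: the h-index is 3,
# since three papers have >= 3 citations but no four papers have >= 4.
print(h_index([10, 5, 3, 2, 1, 0]))  # -> 3
```

Note that the computation never looks at what any of the papers actually say, which is precisely the point of this piece.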
Why should this be a problem? In mathematics, for instance, there are essential activities that markers of esteem do not capture. The most important is reviewing papers. Ideally, a paper submitted to a journal is reviewed by one or two other mathematicians, who read it in detail and check that the proofs are correct.
Reading a mathematical paper is hard work and takes time. And each hour spent reviewing a paper is an hour not spent writing your own. In the prevailing publish-or-perish atmosphere, this means reviewers often do not take the time a paper needs: we spend less time polishing and proofreading our own papers, and less time reviewing those of others. I believe the overall quality of research papers is declining because of this.
The dogged hunt for esteem leads to the search for the magical least publishable unit. Starting out, scientists are motivated by the pursuit of knowledge, the desire to answer unanswered questions. It is when the questions turn out to be difficult that mathematics gets interesting and research becomes exciting. But difficulty also means spending time "unproductively", because I am not writing a paper: half a year spent working on a problem is half a year not spent writing papers. So it is tempting to chip off a small sub-problem that I can solve and write a paper about. Then perhaps chip off another sub-problem. And if, after some chipping, the main problem is still too big, there are always other chippable problems to be found.
Measuring mathematics in terms of esteem means that, when discussing other mathematicians, we stop asking questions such as: What is he or she researching? What results has he or she proved? Instead we ask the other kind of question: How many papers have they published in the Annals of Mathematics or Inventiones Mathematicae? How many NSF (National Science Foundation) or EPSRC (Engineering and Physical Sciences Research Council) grants do they hold? We do this because the latter questions are easier to answer. They do not force us to think about actual mathematics, to judge whether a given sub-discipline is important, or to understand the point of a theorem. They even give us the illusion that we can compare someone working on the analysis of PDEs (partial differential equations) with someone doing algebraic topology without having to know much about either area.
Having said this, how robust is the scientific process if we treat science as a sport instead of pursuing it to increase our knowledge? It is a difficult question, because we are all pushed in this direction to some extent. In practice, academic hiring and promotion are tied to markers of esteem: citations, publications and grants. So the more appropriate question is: How much should we swim against the tide? How much time should we spend doing what is important for the community, for students and for mathematics, but will not be measured in numbers? This encompasses many things: writing research monographs, developing high-quality teaching materials, reading other research papers in detail. I do not have an answer to this question, but there are hints, such as studies in psychology that cannot be reproduced, or the debates about foundational work in symplectic geometry, that point to cracks in the facade of science.
Martins Bruveris is a lecturer in mathematics. This piece was originally published on Martins' blog.