
The perils of peer review in economics and other sciences

Journal of Evolutionary Economics

Abstract

The quality of researchers' work in economics and other sciences is generally evaluated through a system of peer review. An experimental test shows that the peer review system can be highly inefficient, creating a bias towards incremental development of existing methods and against exploration of new methods. Previous studies of this issue have put the blame on biases in individual judgement. Here the inefficiency is shown to arise from strategic uncertainty about the extent to which other referees reject new methods, and therefore to occur even when researchers are rational and have perfect information. The experiment also shows that the bias generated by peer review can be alleviated by shifting some quality evaluation to non-researchers, even if the latter are poor at discerning quality.
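
The mechanism described in the abstract is essentially a coordination problem: a rational researcher avoids new methods not because they are worse, but because it is uncertain how many referees would accept them. The following sketch is a deliberately simplified simulation of that logic, not the paper's actual experimental design; the payoff values, the noise term representing strategic uncertainty, and the belief-updating rule are all illustrative assumptions.

# Sketch: strategic uncertainty can lock rational researchers into incremental methods.
# Researchers choose between an "incremental" method (accepted by any referee) and a
# "new" method (accepted only by referees who themselves tolerate new methods).
# All numbers and update rules below are assumptions made for exposition.

import random

PAYOFF_INCREMENTAL = 1.0   # sure payoff: incremental work passes review
PAYOFF_NEW = 1.5           # higher payoff, but only if the referee tolerates new methods

def simulate(initial_share_new, n=200, rounds=30, noise=0.1, seed=1):
    """Return the long-run share of researchers working with the new method."""
    random.seed(seed)
    share_new = initial_share_new
    for _ in range(rounds):
        adopters = 0
        for _ in range(n):
            # Strategic uncertainty: each researcher has only a noisy estimate of
            # how many referees would accept new-method work.
            belief = min(1.0, max(0.0, share_new + random.uniform(-noise, noise)))
            # A rational researcher submits new-method work only if its expected
            # payoff beats the sure payoff from incremental work.
            if belief * PAYOFF_NEW > PAYOFF_INCREMENTAL:
                adopters += 1
        # Referees are drawn from the same population, so next round's acceptance
        # rate for new methods equals this round's share of adopters.
        share_new = adopters / n
    return share_new

for start in (0.5, 0.75, 0.9):
    print(f"initial share using new method {start:.2f} -> long-run share {simulate(start):.2f}")

In this toy setting both "everyone incremental" and "everyone new" are self-sustaining, and pessimistic beliefs about other referees push the population to the lower-payoff outcome even though all agents are rational and know the payoffs. The abstract's proposed remedy, shifting some quality evaluation to non-researchers, would amount to giving new-method work some acceptance probability that does not depend on other referees' choices, which weakens the coordination trap.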

Cite this article

Fölster, S. The perils of peer review in economics and other sciences. J Evol Econ 5, 43–57 (1995). https://doi.org/10.1007/BF01199669
