The impact of biases on simulation-based risk aggregation: modeling cognitive influences on risk assessment

  • Original Paper
  • Journal of Management Control

Abstract

This paper develops a systematic approach to quantifying the effect of judgmental biases on aggregate risk measures. Starting from the standard risk management process, we derive the areas that require expert judgment as input for aggregating risks into risk measures such as Earnings at Risk. We specify three possible gateways through which biases can enter and identify several psychological theories that quantify deviations of expert judgments from objective probabilities. The impact of these cognitive biases on the aggregate risk measure is investigated via Monte Carlo simulation experiments. Through a systematic experimental design, we determine the size of both the average effects and the possible interaction effects of the different biases. The results show that aggregate risk is systematically underestimated when it is based on biased subjective judgments. Moreover, the existence of interaction effects points to potential problems with simple debiasing strategies.
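
To make the mechanism concrete, the following minimal sketch (in Python, using NumPy) aggregates three risks via Monte Carlo simulation and evaluates two hypothetical biases in a 2x2 factorial design: an overconfidence factor that narrows expert-assessed spreads, and an underestimation factor applied to an expert-assessed event probability. This is an illustration under assumed inputs, not the authors' model; every distribution, parameter, and bias size below is an assumption made for the example.

```python
# Illustrative sketch only: Monte Carlo risk aggregation with two
# hypothetical judgmental biases, evaluated in a 2x2 factorial design.
# All distributions, parameters, and bias sizes are assumptions made
# for this example, not values from the paper.
import itertools

import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # Monte Carlo trials per design cell


def earnings_at_risk(spread: float, prob: float, alpha: float = 0.95) -> float:
    """Aggregate three risks and return the alpha-quantile of total loss.

    spread < 1 mimics overconfidence (too-narrow expert distributions);
    prob < 1 mimics an underestimated event probability.
    """
    # Continuous risks whose spreads come from expert judgment.
    r1 = rng.normal(loc=100.0, scale=30.0 * spread, size=N)
    r2 = rng.lognormal(mean=3.0, sigma=0.5 * spread, size=N)
    # Discrete event risk: the occurrence probability is expert-assessed.
    occurs = rng.random(N) < 0.10 * prob
    r3 = np.where(occurs, rng.normal(200.0, 50.0 * spread, N), 0.0)
    return float(np.quantile(r1 + r2 + r3, alpha))


# 2x2 factorial: each bias is either absent (factor 1.0) or present.
spread_levels = (1.0, 0.6)  # overconfidence narrows spreads to 60%
prob_levels = (1.0, 0.5)    # event probability judged at half its value
y = {}
for s, p in itertools.product(spread_levels, prob_levels):
    y[(s, p)] = earnings_at_risk(s, p)
    print(f"spread={s:.1f}  prob={p:.1f}  ->  EaR = {y[(s, p)]:8.1f}")

# Average (main) effects and the interaction, as in a standard
# two-level factorial analysis.
main_spread = (y[(0.6, 1.0)] + y[(0.6, 0.5)] - y[(1.0, 1.0)] - y[(1.0, 0.5)]) / 2
main_prob = (y[(1.0, 0.5)] + y[(0.6, 0.5)] - y[(1.0, 1.0)] - y[(0.6, 1.0)]) / 2
interaction = (y[(0.6, 0.5)] - y[(1.0, 0.5)] - y[(0.6, 1.0)] + y[(1.0, 1.0)]) / 2
print(f"main effect of overconfidence:      {main_spread:8.1f}")
print(f"main effect of probability bias:    {main_prob:8.1f}")
print(f"interaction between the two biases: {interaction:8.1f}")
```

With narrowed spreads and a lowered event probability, the 95% quantile of the aggregate loss falls, mirroring the qualitative finding that biased inputs understate aggregate risk; a nonzero interaction term illustrates why biases need not combine additively, which is what complicates one-bias-at-a-time debiasing strategies.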

Author information

Correspondence to Matthias Meyer.

Additional information

Earlier versions of the paper benefited from discussions at the European Risk Conferences in Milan, London, and Nottingham, the Annual Conference for Management Accounting Research in Vallendar, and the European Accounting Association Annual Congress in Rome.

About this article

Cite this article

Meyer, M., Grisar, C. & Kuhnert, F. The impact of biases on simulation-based risk aggregation: modeling cognitive influences on risk assessment. J Manag Control 22, 79–105 (2011). https://doi.org/10.1007/s00187-011-0127-6
