
An instrument for measuring the success of the requirements engineering process in information systems development

Published in: Empirical Software Engineering

Abstract

Because a successful requirements engineering process is a prerequisite for a successful software system, there is strong motivation to evaluate, understand, and improve requirements engineering practices. Measuring requirements engineering success is central to such evaluation, understanding, and improvement. This paper describes a research study whose objective was to develop an instrument for measuring the success of the requirements engineering process. The domain of the study is the development of customer-specific business information systems. The main result is a subjective instrument for measuring requirements engineering success. The instrument consists of 32 indicators covering the two dimensions of requirements engineering success identified during the study as most important: the quality of requirements engineering products and the quality of requirements engineering service. Evidence is presented that the instrument has desirable psychometric properties, such as high reliability and good validity.
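The reliability claimed for such an instrument is conventionally assessed with Cronbach's coefficient alpha (Cronbach 1951, in the references below). As an illustrative sketch only — the indicator names, respondent ratings, and function below are invented for this example and are not data from the study — alpha for a set of indicator scores can be computed as:

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item instrument.
#   alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total scores)
# The 4-indicator, 6-respondent ratings below are hypothetical, for illustration only.

def cronbach_alpha(scores):
    """scores: one list of k item scores per respondent."""
    k = len(scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 7-point ratings on four success indicators:
ratings = [
    [5, 6, 5, 6],
    [3, 3, 4, 3],
    [6, 7, 6, 6],
    [4, 4, 5, 4],
    [2, 3, 2, 3],
    [5, 5, 6, 5],
]
print(round(cronbach_alpha(ratings), 3))  # → 0.967
```

Values above roughly 0.7–0.8 are usually taken as acceptable internal consistency; an alpha near 0.97, as in this invented sample, would indicate that the indicators measure a single underlying construct very consistently.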


References

  • Adams, D., Nelson, R., and Todd, P. 1992. Perceived usefulness, ease of use, and usage of information technology: A replication. MIS Quarterly, 227–247.

  • Aldenderfer, M., and Blashfield, R. 1984. Cluster Analysis. Sage Publications.

  • Bailey J., and Pearson S. 1983. Development of a tool for measuring and analyzing computer user satisfaction. Management Science 29(5): 530–545.

  • Basili V., and Perricone B. 1984. Software errors and complexity: An empirical investigation. Communications of the ACM 27(1): 42–52.

  • Basili, V., and Weiss, D. 1981. Evaluation of a software requirements document by analysis of change data. Proceedings of the Fifth International Conference on Software Engineering, 314–323.

  • Bell, T., and Thayer, T. 1976. Software requirements: Are they really a problem? Proceedings of the Second International Conference on Software Engineering, 61–68.

  • Boehm, B. 1984. Verifying and validating software requirements and design specifications. IEEE Software, 75–88.

  • Briand, L., El Emam, K., and Morasca, S. 1995. Theoretical and empirical validation of software product measures. International Software Engineering Research Network, Technical report ISERN-95-03.

  • Carmines, E., and Zeller, R. 1979. Reliability and Validity Assessment. Sage Publications.

  • Cohen, J., and Cohen, P. 1983. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Lawrence Erlbaum Associates.

  • Cordes D., and Carver D. 1989. Evaluation method for user requirements documents. Information and Software Technology 31(4): 181–188.

  • Cronbach L. 1951. Coefficient alpha and the internal structure of tests. Psychometrika, 297–334, September.

  • Cronbach, L. 1971. Test validation. Educational Measurement (Thorndike, R., ed.), American Council on Education.

  • Curtis, B. 1981. Experimental evaluation of software characteristics. Software Metrics: An Analysis And Evaluation (Perlis, A., Sayward, F., and Shaw, M., eds.), MIT Press.

  • Curtis B., Krasner H., and Iscoe N. 1988. A field study of the software design process for large systems. Communications of the ACM 31(11): 1268–1286.

  • Daly E. 1977. Management of software development. IEEE Transactions on Software Engineering SE-3(3): 229–242.

  • Davis G. 1982. Strategies for information requirements determination. IBM Systems Journal 21(1): 4–31.

  • Davis A. 1988. A comparison of techniques for the specification of external system behavior. Communications of the ACM 31(9): 1098–1115.

  • Davis J. 1989. Identification of errors in software requirements through use of automated requirements tools. Information and Software Technology 31(9): 472–476.

  • Davis F. 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 13(3): 319–340.

  • Davis, A. 1993. Software Requirements: Objects, Functions, and States. Prentice Hall.

  • Dawson, J. 1991. Toronto laboratory requirements process reference guide. Technical Report (Unpublished), IBM Canada Laboratory.

  • Dubin, R. 1969. Theory Building. Free Press.

  • El Emam K., Quintin S., and Madhavji N. H. 1996. User participation in the requirements engineering process: An empirical study. Requirements Engineering Journal 1: 4–26.

  • El Emam, K., and Madhavji, N. H. 1995. Measuring the success of requirements engineering processes. Proceedings of the Second IEEE International Symposium on Requirements Engineering, 204–211.

  • Endres A. 1975. An analysis of errors and their causes in system programs. IEEE Transactions on Software Engineering SE-1(2): 140–149.

  • Farbey B. 1990. Software quality metrics: Considerations about requirements and requirements specifications. Information and Software Technology 32(1): 60–64.

  • Glaser, B., and Strauss, A. 1967. The Discovery of Grounded Theory. Aldine Publishing Company.

  • Heise, D., and Bohrnstedt, G. 1970. Validity, invalidity, and reliability. Sociological Methodology, (Borgatta, E., and Bohrnstedt, G., eds.), Jossey Bass, pp. 104–129.

  • Ives B., Olson M., and Baroudi J. 1983. The measurement of user information satisfaction. Communications of the ACM 26(10): 785–793.

  • Keen P. 1988. Information systems and organizational change. Communications of the ACM 24(1): 24–33.

  • Keen, P., and Gerson, E. 1977. The politics of software systems design. Datamation November: 80–84.

  • Kerlinger, F. 1986. Foundations of Behavioral Research. Holt, Rinehart, and Winston.

  • Leonard-Barton, D., and Kraus, W. 1985. Implementing new technology. Harvard Business Review November–December: 102–110.

  • Lubars, M., Potts, C., and Richter, C. 1993. A review of the state of the practice in requirements modeling. Proceedings of the IEEE International Symposium on Requirements Engineering January: 2–14.

  • Marshall, C., and Rossman, G. 1989. Designing Qualitative Research. Sage Publications.

  • Martin J., and Tsai W. 1990. N-fold inspection: A requirements analysis technique. Communications of the ACM 33(2): 225–232.

  • Miller G. 1969. A psychological method to investigate verbal concepts. Journal of Mathematical Psychology 6: 169–191.

  • Naumann, J., and Davis, G. 1978. A contingency theory to select an information requirements determination methodology. Proceedings of the 2nd Software Life Cycle Management Workshop, 63–65.

  • Naumann J. D., Davis G. B., and McKeen J. D. 1980. Determining information requirements: A contingency method for selection of a requirements assurance strategy. Journal of Systems and Software 1: 273–281.

  • Nunnally, J. 1967. Psychometric Theory. McGraw Hill.

  • Osgood, C., Suci, G., and Tannenbaum, P. 1967. The Measurement of Meaning. University of Illinois Press.

  • Page, E. 1963. Ordered hypotheses for multiple treatments: A significance test for linear ranks. American Statistical Association Journal, 216–230, March.

  • Randolph, W., and Posner, B. 1988. What every manager needs to know about project management. Sloan Management Review Summer: 65–73.

  • Rockart, J. 1988. The line takes the leadership—IS management in a wired society. Sloan Management Review Summer: 57–64.

  • Rockart J., and Morton M. S. 1984. Implications of changes in information technology for corporate strategy. Interfaces 14(1): 84–95.

  • Sethi V., and King W. 1991. Construct measurement in information systems research: An illustration in strategic systems. Decision Sciences 22: 455–472.

  • Software Engineering Laboratory. 1991. Software Engineering Laboratory (SEL) relationships, models, and management rules. Technical Report SEL-91-001, NASA Goddard Space Flight Center, February.

  • Straub, D. 1989. Validating instruments in MIS research. MIS Quarterly June: 147–169.

  • Subramanian A., and Nilakanta S. 1994. Measurement: A blueprint for theory-building in MIS. Information and Management 26: 13–20.

  • Weiss D. 1979. Evaluating software development by error analysis: The data from the architecture research facility. Journal of Systems and Software 1: 57–70.

  • Weiss D., and Basili V. 1985. Evaluating software development by analysis of changes: Some data from the Software Engineering Laboratory. IEEE Transactions on Software Engineering SE-11(2): 157–168.

  • Zagorsky, C. 1990. Case study: Managing the change to CASE. Journal of Information Systems Management Summer: 24–32.

Additional information

This paper is a longer and more detailed version of the study reported in El Emam and Madhavji (1995).

This work was supported in part by the IT Macroscope Project and NSERC Canada.

Cite this article

El Emam, K., Madhavji, N.H. An instrument for measuring the success of the requirements engineering process in information systems development. Empirical Software Engineering 1, 201–240 (1996). https://doi.org/10.1007/BF00127446

