Article title

A review of process metrics in defect prediction studies

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Process metrics appear to be an effective addition to software defect prediction models, which are usually built upon product metrics. We present a review of research studies that investigate process metrics in defect prediction. The following process metrics are discussed: Number of Revisions, Number of Distinct Committers, Number of Modified Lines, Is New, and Number of Defects in Previous Revision. We not only introduce the definitions of the aforementioned process metrics, but also present the most important results and recent advances regarding their use in software defect prediction models, together with a summary and a taxonomy of the analysed process metrics.
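To make the discussed metrics more concrete, the minimal sketch below (an illustration, not taken from the reviewed paper) shows one plausible way to derive Number of Revisions, Number of Distinct Committers and Number of Modified Lines per file from a Git repository's history. The function name collect_process_metrics, the metric abbreviations NR/NDC/NML and the reliance on `git log --numstat` are assumptions made for this example.

# Illustrative sketch only: derive three of the reviewed process metrics
# per file from `git log` output (assumes `git` is on PATH and repo_path
# points to a local clone).
import subprocess
from collections import defaultdict

def collect_process_metrics(repo_path="."):
    """Return {file_path: {"NR": ..., "NDC": ..., "NML": ...}} where
    NR  = Number of Revisions (commits that touched the file),
    NDC = Number of Distinct Committers,
    NML = Number of Modified Lines (lines added + lines deleted)."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--format=--%ce"],
        capture_output=True, text=True, check=True,
    ).stdout

    per_file = defaultdict(lambda: {"NR": 0, "NML": 0, "committers": set()})
    committer = None
    for line in log.splitlines():
        if line.startswith("--"):          # header line of the next commit
            committer = line[2:]
        elif line.strip():                 # "added<TAB>deleted<TAB>path" numstat line
            added, deleted, path = line.split("\t")
            entry = per_file[path]
            entry["NR"] += 1
            entry["committers"].add(committer)
            if added != "-":               # "-" marks binary files in --numstat
                entry["NML"] += int(added) + int(deleted)

    return {path: {"NR": e["NR"], "NDC": len(e["committers"]), "NML": e["NML"]}
            for path, e in per_file.items()}

if __name__ == "__main__":
    for path, metrics in sorted(collect_process_metrics(".").items())[:10]:
        print(path, metrics)

The remaining two metrics are not computed here: Is New would require checking whether a file first appears in the analysed revision, and Number of Defects in Previous Revision would require linking defect data to earlier revisions.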
Keywords
Year
Volume
Pages
133–145
Physical description
Bibliography: 44 items
Authors
author
author
  • Wroclaw University of Technology, Poland
Bibliography
  • [1] B. Henderson-Sellers. Object-oriented metrics: measures of complexity. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1996.
  • [2] T. Illes-Seifert, B. Paech. Exploring the relationship of a file’s history and its fault-proneness: An empirical method and its application to open source programs. Information and Software Technology, 52(5):539–558, 2010.
  • [3] S. H. Kan. Metrics and Models in Software Quality Engineering. Addison-Wesley, Boston, MA, USA, 2002.
  • [4] L. Madeyski. Test-Driven Development: An Empirical Evaluation of Agile Practice. Springer, (Heidelberg, Dordrecht, London, New York), 2010.
  • [5] R. M. Bell, T. J. Ostrand, E. J. Weyuker. Looking for bugs in all the right places. ISSTA ’06: Proceedings of the 2006 international symposium on Software testing and analysis, pp. 61–72, New York, NY, USA, 2006. ACM.
  • [6] T. J. Ostrand, E. J. Weyuker, R. M. Bell. Where the bugs are. ISSTA ’04: Proceedings of the 2004 ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 86–96, New York, NY, USA, 2004. ACM.
  • [7] E. J. Weyuker, T. J. Ostrand, R. M. Bell. Adapting a fault prediction model to allow widespread usage. PROMISE’06: Proceedings of the 4th International Workshop on Predictor Models in Software Engineering, pp. 1–5, New York, NY, USA, 2006. ACM.
  • [8] E. J. Weyuker, T. J. Ostrand, R. M. Bell. Using Developer Information as a Factor for Fault Prediction. PROMISE ’07: Proceedings of the Third International Workshop on Predictor Models in Software Engineering, p. 8, Washington, DC, USA, 2007. IEEE Computer Society.
  • [9] E. J. Weyuker, T. J. Ostrand, R. M. Bell. Do too many cooks spoil the broth? Using the number of developers to enhance defect prediction models. Empirical Software Engineering, 13(5):539–559, 2008.
  • [10] J. Ratzinger, M. Pinzger, H. Gall. EQ-Mine: Predicting Short-Term Defects for Software Evolution. M. Dwyer, A. Lopes, editors, Fundamental Approaches to Software Engineering, vol. 4422 of Lecture Notes in Computer Science, pp. 12–26. Springer Berlin / Heidelberg, 2007.
  • [11] T. L. Graves, A. F. Karr, J. S. Marron, H. Siy. Predicting Fault Incidence Using Software Change History. IEEE Transactions on Software Engineering, 26(7):653–661, 2000.
  • [12] A. Schröter, T. Zimmermann, R. Premraj, A. Zeller. If your bug database could talk. Proceedings of the 5th International Symposium on Empirical Software Engineering, Volume II: Short Papers and Posters, pp. 18–20, 2006.
  • [13] N. Nagappan, T. Ball. Using Software Dependencies and Churn Metrics to Predict Field Failures: An Empirical Case Study. ESEM ’07: Proceedings of the First International Symposium on Empirical Software Engineering and Measurement, pp. 364–373, Washington, DC, USA, 2007. IEEE Computer Society.
  • [14] N. Nagappan, A. Zeller, T. Zimmermann, K. Herzig, B. Murphy. Change Bursts as Defect Predictors. Proceedings of the 2010 IEEE 21st International Symposium on Software Reliability Engineering, ISSRE ’10, pp. 309–318, Washington, DC, USA, 2010. IEEE Computer Society.
  • [15] E. Shihab, Z. M. Jiang, W. M. Ibrahim, B. Adams, A. E. Hassan. Understanding the impact of code and process metrics on post-release defects: a case study on the Eclipse project. ESEM ’10: Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, pp. 1–10, New York, NY, USA, 2010. ACM.
  • [16] R. Moser, W. Pedrycz, G. Succi. A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction. ICSE ’08: Proceedings of the 30th International Conference on Software Engineering, pp. 181–190, New York, NY, USA, 2008. ACM.
  • [17] T. Zimmermann, N. Nagappan, H. Gall, E. Giger, B. Murphy. Cross-project defect prediction: a large scale experiment on data vs. domain vs. process. ESEC/FSE ’09: Proceedings of the 7th joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, pp. 91–100, New York, NY, USA, 2009. ACM.
  • [18] S. Matsumoto, Y. Kamei, A. Monden, K.-i. Matsumoto, M. Nakamura. An Analysis of Developer Metrics for Fault Prediction. PROMISE ’10: Proceedings of the Sixth International Conference on Predictor Models in Software Engineering, pp. 18:1–18:9. ACM, 2010.
  • [19] E. J. Weyuker, T. J. Ostrand, R. M. Bell. Programmer-based Fault Prediction. PROMISE ’10: Proceedings of the Sixth International Conference on Predictor Models in Software Engineering, pp. 19:1–19:10. ACM, 2010.
  • [20] N. Nagappan, B. Murphy, V. Basili. The influence of organizational structure on software quality: an empirical case study. ICSE ’08: Proceedings of the 30th International Conference on Software Engineering, pp. 521–530, New York, NY, USA, 2008. ACM.
  • [21] T. M. Khoshgoftaar, E. B. Allen, N. Goel, A. Nandi, J. McMullan. Detection of software modules with high debug code churn in a very large legacy system. Proceedings of the Seventh International Symposium on Software Reliability Engineering, ISSRE ’96, pp. 364–, Washington, DC, USA, 1996. IEEE Computer Society.
  • [22] L. Layman, G. Kudrjavets, N. Nagappan. Iterative identification of fault-prone binaries using in-process metrics. ESEM ’08: Proceedings of the Second ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, pp. 206–212, New York, NY, USA, 2008. ACM.
  • [23] N. Nagappan, T. Ball. Use of relative code churn measures to predict system defect density. ICSE ’05: Proceedings of the 27th International Conference on Software Engineering, pp. 284–292, New York, NY, USA, 2005. ACM.
  • [24] R. Purushothaman, D. E. Perry. Toward Understanding the Rhetoric of Small Source Code Changes. IEEE Transactions on Software Engineering, 31:511–526, 2005.
  • [25] J. Śliwerski, T. Zimmermann, A. Zeller. When do changes induce fixes? MSR ’05: Proceedings of the 2005 International Workshop on Mining Software Repositories, pp. 1–5, New York, NY, USA, 2005. ACM.
  • [26] A. E. Hassan. Predicting faults using the complexity of code changes. ICSE ’09: Proceedings of the 31st International Conference on Software Engineering, pp. 78–88, Washington, DC, USA, 2009. IEEE Computer Society.
  • [27] E. Giger, M. Pinzger, H. C. Gall. Comparing fine-grained source code changes and code churn for bug prediction. Proceedings of the 8th Working Conference on Mining Software Repositories, MSR ’11, pp. 83–92, New York, NY, USA, 2011. ACM.
  • [28] B. Fluri, H. C. Gall. Classifying Change Types for Qualifying Change Couplings. Proceedings of the 14th IEEE International Conference on Program Comprehension, ICPC ’06, pp. 35–45, Washington, DC, USA, 2006. IEEE Computer Society.
  • [29] B. Fluri, M. Wuersch, M. Pinzger, H. Gall. Change Distilling: Tree Differencing for Fine-Grained Source Code Change Extraction. IEEE Transactions on Software Engineering, 33(11):725–743, November 2007.
  • [30] E. Giger, M. Pinzger, H. Gall. Using the Gini coefficient for bug prediction in Eclipse. Proceedings of the 12th International Workshop on Principles of Software Evolution and the 7th Annual ERCIM Workshop on Software Evolution, IWPSE-EVOL ’11, pp. 51–55, New York, NY, USA, 2011. ACM.
  • [31] T. J. Ostrand, E. J. Weyuker, R. M. Bell. Predicting the Location and Number of Faults in Large Software Systems. IEEE Transactions on Software Engineering, 31(4):340–355, 2005.
  • [32] T. M. Khoshgoftaar, E. B. Allen, R. Halstead, G. P. Trio, R. M. Flass. Using Process History to Predict Software Quality. Computer, 31(4):66–72, 1998.
  • [33] T. J. Ostrand, E. J. Weyuker. The distribution of faults in a large industrial software system. ISSTA ’02: Proceedings of the 2002 ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 55–64, New York, NY, USA, 2002. ACM.
  • [34] D. Wahyudin, A. Schatten, D. Winkler, A. M. Tjoa, S. Biffl. Defect Prediction using Combined Product and Project Metrics - A Case Study from the Open Source "Apache" MyFaces Project Family. SEAA ’08: Proceedings of the 2008 34th Euromicro Conference on Software Engineering and Advanced Applications, pp. 207–215, Washington, DC, USA, 2008. IEEE Computer Society.
  • [35] E. Arisholm, L. C. Briand. Predicting fault-prone components in a java legacy system. ISESE ’06: Proceedings of the 2006 ACM/IEEE International Symposium on Empirical Software Engineering, pp. 8–17, New York, NY, USA, 2006. ACM.
  • [36] S. Kim, T. Zimmermann, E. J. Whitehead Jr., A. Zeller. Predicting Faults from Cached History. ICSE ’07: Proceedings of the 29th International Conference on Software Engineering, pp. 489–498, Washington, DC, USA, 2007. IEEE Computer Society.
  • [37] T. Gyimothy, R. Ferenc, I. Siket. Empirical Validation of Object-Oriented Metrics on Open Source Software for Fault Prediction. IEEE Transactions on Software Engineering, 31(10):897–910, 2005.
  • [38] K. Gao, T. M. Khoshgoftaar, H. Wang, N. Seliya. Choosing software metrics for defect prediction: an investigation on feature selection techniques. Software: Practice and Experience, 41(5):579–606, 2011.
  • [39] M. Jureczko. Significance of Different Software Metrics in Defect Prediction. Software Engineering: An International Journal, 1(1):86–95, 2011.
  • [40] R. M. Bell, T. J. Ostrand, E. J. Weyuker. The limited impact of individual developer data on software defect prediction. Empirical Software Engineering, pp. 1–28, 2011. DOI: 10.1007/s10664-011-9178-4.
  • [41] R. M. Bell, T. J. Ostrand, E. J. Weyuker. Does measuring code change improve fault prediction? Proceedings of the 7th International Conference on Predictive Models in Software Engineering, Promise ’11, pp. 2:1–2:8, New York, NY, USA, 2011. ACM.
  • [42] Y. Shin, A. Meneely, L. Williams, J. Osborne. Evaluating Complexity, Code Churn, and Developer Activity Metrics as Indicators of Software Vulnerabilities. IEEE Transactions on Software Engineering, 37(6):772–787, November-December 2011.
  • [43] B. Sisman, A. C. Kak. Incorporating version histories in Information Retrieval based bug localization. Proceedings of the 9th IEEE Working Conference on Mining Software Repositories, MSR ’12, pp. 50–59, June 2012.
  • [44] S. Krishnan, C. Strasburg, R. R. Lutz, K. Goševa-Popstojanova. Are change metrics good predictors for an evolving software product line? Proceedings of the 7th International Conference on Predictive Models in Software Engineering, Promise ’11, pp. 7:1–7:10, New York, NY, USA, 2011. ACM.
Document type
YADDA identifier
bwmeta1.element.baztech-article-BPS3-0025-0098