Article title

Cross–Project Defect Prediction With Respect To Code Ownership Model: An Empirical Study

Full text
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
The paper presents an analysis of 83 versions of industrial, open-source, and academic projects. We empirically evaluated whether these project types constitute separate classes of projects with regard to defect prediction. Statistical tests showed that significant differences exist between models trained on the aforementioned project classes. This work takes the next step towards cross-project reusability of defect prediction models and facilitates their adoption, which has so far been very limited.
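The abstract describes the core procedure: train defect prediction models on different classes of projects and test statistically whether the classes differ. The Python sketch below illustrates that general idea only; it is not the authors' actual pipeline, and the synthetic data, the regression model, the feature count, and the 20% cut-off are all hypothetical placeholders.

import numpy as np
from scipy.stats import wilcoxon
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def synthetic_versions(n_versions, n_classes=200, shift=0.0):
    # Hypothetical stand-in for real project data: a metrics matrix X
    # (e.g. CK metrics per Java class) and a defect count vector y
    # for each project version.
    versions = []
    for _ in range(n_versions):
        X = rng.normal(size=(n_classes, 6))
        y = X @ rng.uniform(0.5, 1.5, size=6) + shift + rng.normal(size=n_classes)
        versions.append((X, np.clip(y, 0, None)))
    return versions

industrial = synthetic_versions(12, shift=1.0)   # one project class
open_source = synthetic_versions(12, shift=0.0)  # another project class

def evaluate(train_versions, test_versions):
    # Train one model on all training versions, then report, per test
    # version, the fraction of all defects found in the 20% of classes
    # that the model ranks as most defect-prone.
    X_tr = np.vstack([X for X, _ in train_versions])
    y_tr = np.concatenate([y for _, y in train_versions])
    model = LinearRegression().fit(X_tr, y_tr)
    scores = []
    for X_te, y_te in test_versions:
        ranking = np.argsort(model.predict(X_te))[::-1]
        top = ranking[: len(ranking) // 5]
        scores.append(y_te[top].sum() / y_te.sum())
    return np.array(scores)

within = evaluate(open_source, open_source)  # trained on the same class
cross = evaluate(industrial, open_source)    # trained on a different class

# Paired non-parametric test over the same evaluation versions:
# do the two training regimes give significantly different results?
stat, p = wilcoxon(within, cross)
print(f"within-class: {within.mean():.3f}  cross-class: {cross.mean():.3f}  p-value: {p:.4f}")

The Wilcoxon signed-rank test is a common non-parametric choice for paired comparisons of model results in defect prediction studies; the models, metrics, and exact test procedure used in the study are described in the paper itself.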
Year
2015
Pages
21–35
Physical description
Bibliography: 47 items, tables.
Authors
author: Marian Jureczko
  • Institute of Computer Engineering, Control and Robotics, Wroclaw University of Technology
author: Lech Madeyski
  • Faculty of Computer Science and Management, Wroclaw University of Technology
Bibliography
  • [1] L. Briand, W. Melo, and J. Wüst, “Assessing the applicability of fault-proneness models across object-oriented software projects,” IEEE Transactions on Software Engineering, Vol. 28, No. 7, 2002, pp. 706–720.
  • [2] L. Samuelis, “On principles of software engineering – role of the inductive inference,” e-Informatica Software Engineering Journal, Vol. 6, No. 1, 2012, pp. 71–77.
  • [3] L. Fernandez, P. J. Lara, and J. J. Cuadrado, “Efficient software quality assurance approaches oriented to UML models in real life,” Idea Group Publishing, 2007, pp. 385–426.
  • [4] M. L. Hutcheson, Software Testing Fundamentals. John Wiley & Sons, 2003.
  • [5] B. W. Boehm, “Understanding and controlling software costs,” Journal of Parametrics, Vol. 8, No. 1, 1988, pp. 32–68.
  • [6] G. Denaro and M. Pezzè, “An empirical evaluation of fault-proneness models,” in Proceedings of the 24th International Conference on Software Engineering (ICSE 2002). IEEE, 2002, pp. 241–251.
  • [7] C. Kaner and W. P. Bond, “Software engineering metrics: What do they measure and how do we know?” in 10th International Software Metrics Symposium. IEEE, 2004, p. 6.
  • [8] N. E. Fenton and M. Neil, “Software metrics: successes, failures and new directions,” Journal of Systems and Software, Vol. 47, No. 2, 1999, pp. 149–157.
  • [9] T. Hall and N. Fenton, “Implementing effective software metrics programs,” IEEE Software, Vol. 14, No. 2, 1997, pp. 55–65.
  • [10] B. Turhan, T. Menzies, A. B. Bener, and J. Di Stefano, “On the relative value of cross-company and within-company data for defect prediction,” Empirical Software Engineering, Vol. 14, No. 5, 2009, pp. 540–578.
  • [11] M. T. Villalba, L. Fernández-Sanz, and J. Martínez, “Empirical support for the generation of domain-oriented quality models,” IET Software, Vol. 4, No. 1, 2010, pp. 1–14.
  • [12] T. Zimmermann, N. Nagappan, H. Gall, E. Giger, and B. Murphy, “Cross-project defect prediction: a large scale experiment on data vs. domain vs. process,” in Proceedings of the 7th joint meeting of the European software engineering conference and the ACM SIGSOFT symposium on The foundations of software engineering. ACM, 2009, pp. 91–100.
  • [13] M. Jureczko and L. Madeyski, “Towards identifying software project clusters with regard to defect prediction,” in Proceedings of the 6th International Conference on Predictive Models in Software Engineering. ACM, 2010, p. 9.
  • [14] R. Subramanyam and M. S. Krishnan, “Empirical analysis of CK metrics for object-oriented design complexity: Implications for software defects,” IEEE Transactions on Software Engineering, Vol. 29, No. 4, 2003, pp. 297–310.
  • [15] N. Nagappan, T. Ball, and A. Zeller, “Mining metrics to predict component failures,” in Proceedings of the 28th international conference on Software engineering. ACM, 2006, pp. 452–461.
  • [16] M. Jureczko and D. Spinellis, “Using object-oriented design metrics to predict software defects,” in Models and Methods of System Dependability. Oficyna Wydawnicza Politechniki Wrocławskiej, 2010, pp. 69–81.
  • [17] M. Jureczko and L. Madeyski, “Predykcja defektów na podstawie metryk oprogramowania – identyfikacja klas projektów” [Defect prediction based on software metrics – identification of project classes], in Proceedings of the Krajowa Konferencja Inżynierii Oprogramowania (KKIO 2010). PWNT, 2010, pp. 185–192.
  • [18] Y. Liu, T. M. Khoshgoftaar, and N. Seliya, “Evolutionary optimization of software quality modeling with multiple repositories,” IEEE Transactions on Software Engineering, Vol. 36, No. 6, 2010, pp. 852–864.
  • [19] Z. He, F. Shu, Y. Yang, M. Li, and Q. Wang, “An investigation on the feasibility of cross-project defect prediction,” Automated Software Engineering, Vol. 19, No. 2, 2012, pp. 167–199.
  • [20] S. R. Chidamber and C. F. Kemerer, “A metrics suite for object oriented design,” IEEE Transactions on Software Engineering, Vol. 20, No. 6, 1994, pp. 476–493.
  • [21] B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity. Prentice Hall, 1996.
  • [22] R. Martin, “OO design quality metrics: An analysis of dependencies,” 1994.
  • [23] J. Bansiya and C. G. Davis, “A hierarchical model for object-oriented design quality assessment,” IEEE Transactions on Software Engineering, Vol. 28, No. 1, 2002, pp. 4–17.
  • [24] M.-H. Tang, M.-H. Kao, and M.-H. Chen, “An empirical study on object-oriented metrics,” in Sixth International Software Metrics Symposium, 1999. Proceedings. IEEE, 1999, pp. 242–249.
  • [25] T. J. McCabe, “A complexity measure,” IEEE Transactions on Software Engineering, Vol. SE-2, No. 4, 1976, pp. 308–320.
  • [26] L. Madeyski, reproducer: Reproduce Statistical Analyses and Meta-Analyses, 2015, R package. [Online]. http://CRAN.R-project.org/package=reproducer
  • [27] L. Madeyski and B. A. Kitchenham, “Reproducible Research – What, Why and How,” Wroclaw University of Technology, PRE W08/2015/P-020, 2015.
  • [28] L. Madeyski, B. A. Kitchenham, and S. L. Pfleeger, “Why Reproducible Research is Beneficial for Security Research,” (under review), 2015.
  • [29] J. K. Chhabra and V. Gupta, “A survey of dynamic software metrics,” Journal of Computer Science and Technology, Vol. 25, No. 5, 2010, pp. 1016–1029.
  • [30] S. Misra, M. Koyuncu, M. Crasso, C. Mateos, and A. Zunino, “A suite of cognitive complexity metrics,” in Computational Science and Its Applications–ICCSA 2012. Springer, 2012, pp. 234–247.
  • [31] L. Madeyski and M. Jureczko, “Which Process Metrics Can Significantly Improve Defect Prediction Models? An Empirical Study,” Software Quality Journal, Vol. 23, No. 3, 2015, pp. 393–422. [Online]. http://dx.doi.org/10.1007/s11219-014-9241-7
  • [32] M. Jureczko and J. Magott, “QualitySpy: a framework for monitoring software development processes,” Journal of Theoretical and Applied Computer Science, Vol. 6, No. 1, 2012, pp. 35–45.
  • [33] E. J. Weyuker, T. J. Ostrand, and R. M. Bell, “Comparing the effectiveness of several modeling methods for fault prediction,” Empirical Software Engineering, Vol. 15, No. 3, 2010, pp. 277–295.
  • [34] L. Madeyski, Test-driven development: An empirical evaluation of agile practice. Springer, 2010.
  • [35] S. H. Kan, Metrics and models in software quality engineering. Addison-Wesley Longman Publishing Co., Inc., 2002.
  • [36] M. Fischer, M. Pinzger, and H. Gall, “Populating a release history database from version control and bug tracking systems,” in International Conference on Software Maintenance, 2003. ICSM 2003. Proceedings. IEEE, 2003, pp. 23–32.
  • [37] T. Zimmermann, R. Premraj, and A. Zeller, “Predicting defects for eclipse,” in International Workshop on Predictor Models in Software Engineering, 2007. PROMISE’07: ICSE Workshops 2007. IEEE, 2007, pp. 9–9.
  • [38] M. D’Ambros, A. Bacchelli, and M. Lanza, “On the impact of design flaws on software defects,” in 10th International Conference on Quality Software (QSIC 2010). IEEE, 2010, pp. 23–31.
  • [39] M. D’Ambros, M. Lanza, and R. Robbes, “An extensive comparison of bug prediction approaches,” in 7th IEEE Working Conference on Mining Software Repositories (MSR), 2010. IEEE, 2010, pp. 31–41.
  • [40] A. Bacchelli, M. D’Ambros, and M. Lanza, “Are popular classes more defect prone?” in Fundamental Approaches to Software Engineering. Springer, 2010, pp. 59–73.
  • [41] G. Antoniol, K. Ayari, M. Di Penta, F. Khomh, and Y.-G. Guéhéneuc, “Is it a bug or an enhancement?: a text-based approach to classify change requests,” in Proceedings of the 2008 conference of the center for advanced studies on collaborative research: meeting of minds. ACM, 2008, p. 23.
  • [42] T. J. Ostrand, E. J. Weyuker, and R. M. Bell, “Where the bugs are,” in ACM SIGSOFT Software Engineering Notes, Vol. 29, No. 4. ACM, 2004, pp. 86–96.
  • [43] V. R. Basili, L. C. Briand, and W. L. Melo, “A validation of object-oriented design metrics as quality indicators,” IEEE Transactions on Software Engineering, Vol. 22, No. 10, 1996, pp. 751–761.
  • [44] F. Brito e Abreu and W. Melo, “Evaluating the impact of object-oriented design on software quality,” in Proceedings of the 3rd International Software Metrics Symposium, 1996. IEEE, 1996, pp. 90–99.
  • [45] W. L. Melo, L. Briand, and V. R. Basili, “Measuring the impact of reuse on quality and productivity in object-oriented systems,” 1998.
  • [46] K. Aggarwal, Y. Singh, A. Kaur, and R. Malhotra, “Empirical study of object-oriented metrics,” Journal of Object Technology, Vol. 5, No. 8, 2006, pp. 149–173.
  • [47] P. Martenka and B. Walter, “Hierarchical model for evaluating software design quality,” e-Informatica Software Engineering Journal, Vol. 4, No. 1, 2010, pp. 21–30.
Document type
YADDA identifier
bwmeta1.element.baztech-3de0e99b-a8ca-4dcd-937a-2d10208dedac