Article title

A Systematic Review of Ensemble Techniques for Software Defect and Change Prediction

Authors
Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Background: The use of ensemble techniques has steadily gained popularity in several software quality assurance activities. These aggregated classifiers have proven superior to their constituent base models. Although ensemble techniques have been widely used in key areas such as Software Defect Prediction (SDP) and Software Change Prediction (SCP), the current state of the art concerning their use warrants scrutiny. Aim: The study aims to assess, evaluate, and uncover possible research gaps with respect to the use of ensemble techniques in SDP and SCP. Method: This study conducts an extensive literature review of 77 primary studies on the basis of the category, application, rules of formulation, performance, and possible threats of the proposed/utilized ensemble techniques. Results: Ensemble techniques were primarily categorized on the basis of the similarity, aggregation, relationship, diversity, and dependency of their base models. They were also found effective in several applications, such as their use as a learning algorithm for developing SDP/SCP models and for addressing the class imbalance issue. Conclusion: The results of the review confirm the need for more studies that propose, assess, validate, and compare various categories of ensemble techniques for diverse applications in SDP/SCP, such as transfer learning and online learning.
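To make the aggregation idea concrete, below is a minimal, hypothetical sketch of hard majority voting, the simplest combination rule used by many of the ensembles the review surveys. The base models and the metric thresholds (`loc`, `churn`, `complexity`) are illustrative assumptions only, not taken from the paper; a real SDP/SCP study would train learners such as decision trees or naive Bayes on measured software metrics.

```python
# Hypothetical sketch (not from the paper): hard majority voting over
# dissimilar base models, i.e. a heterogeneous ensemble for defect prediction.

def majority_vote(classifiers, module):
    """Aggregate base-model predictions by simple majority (hard voting)."""
    votes = [clf(module) for clf in classifiers]
    return max(set(votes), key=votes.count)

# Toy base "classifiers": each predicts defective (1) vs. clean (0) from one
# made-up module metric. Thresholds are arbitrary, for illustration only.
base_models = [
    lambda m: 1 if m["loc"] > 300 else 0,        # size-based rule
    lambda m: 1 if m["churn"] > 50 else 0,       # change-history rule
    lambda m: 1 if m["complexity"] > 10 else 0,  # complexity-based rule
]

module = {"loc": 450, "churn": 12, "complexity": 15}
prediction = majority_vote(base_models, module)  # two of three models vote 1
```

Here the size-based and complexity-based rules outvote the change-history rule, so the ensemble labels the module defect-prone even though one base model disagrees; that tolerance of individual base-model errors is the motivation for aggregation.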
Year
Pages
art. no. 220105
Physical description
Bibliography: 120 items, figures, tables
Contributors
author
  • Department of Computer Science, Sri Guru Gobind Singh College of Commerce, University of Delhi
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-79a52b8d-a0ff-468c-a641-ab90332ae88d