Article title
Authors
Identifiers
Title variants
Publication languages
Abstracts
Testing is an integral part of software development. At the same time, the manual creation of individual test cases is a lengthy and error-prone process. Hence, intensive research on automated test generation methods has been ongoing for more than twenty years. There are many vastly different approaches that can be considered automated test case generation. A common feature, however, is the generation of the data for the test cases. Ultimately, the test data decide the program branching and can be used at any testing level, from unit tests to tests focused on the behavior of the entire application. The test data are also mostly independent of any specific technology, such as a programming language or paradigm. This paper is a survey of the literature of the last two decades that deals with test data generation or with tests based on it. This survey is not a systematic literature review, and it does not try to answer specific scientific questions formulated in advance. Its purpose is to map and categorize the existing methods and to summarize their common features. Such a survey can be helpful for any team developing its own method for test data generation, as it can serve as a starting point for the exploration of related work.
Year
Volume
Pages
627--636
Physical description
Bibliography: 86 items, tables, charts
Contributors
author
- Department of Computer Science and Engineering/NTIS – New Technologies for the Information Society, European Center of Excellence, Faculty of Applied Sciences, University of West Bohemia, Univerzitni 8, 306 14 Plzen, Czech Republic
author
- NTIS – New Technologies for the Information Society, European Center of Excellence/Department of Computer Science and Engineering, Faculty of Applied Sciences, University of West Bohemia, Univerzitni 8, 306 14 Plzen, Czech Republic
Bibliography
- 1. N. Gupta, A. P. Mathur, and M. L. Soffa, “Generating test data for branch coverage,” in Proceedings ASE 2000 - Fifteenth IEEE International Conference on Automated Software Engineering, Grenoble, September 2000, https://doi.org/10.1109/ASE.2000.873666
- 2. P. Fröhlich and J. Link, “Automated Test Case Generation from Dynamic Models,” in ECOOP '00: Proceedings of the 14th European Conference on Object-Oriented Programming, Cannes, June 2000, pp. 472-491, https://doi.org/10.1007/3-540-45102-1_23
- 3. B. S. Ahmed, K. Z. Zamli, W. Afzal, and M. Bures, “Constrained Interaction Testing: A Systematic Literature Study,” in IEEE Access, vol. 5, 2017, https://doi.org/10.1109/ACCESS.2017.2771562
- 4. M. M. Almasi, H. Hemmati, G. Fraser, A. Arcuri, and J. Benefelds, “An industrial evaluation of unit test generation: Finding real faults in a financial application,” in Proceedings - 2017 IEEE/ACM 39th International Conference on Software Engineering: Software Engineering in Practice Track (ICSE-SEIP), Buenos Aires, May 2017, pp. 263–272, https://doi.org/10.1109/ICSE-SEIP.2017.27
- 5. J. Edvardsson, “A Survey on Automatic Test Data Generation,” in Proceedings of the Second Conference on Computer Science and Engineering, Linköping, October 1999, pp. 21–28.
- 6. S. Anand, E. K. Burke, T. Y. Chen, J. Clark, M. B. Cohen, W. Grieskamp, M. Harman, M. J. Harrold, and P. McMinn, “An orchestrated survey of methodologies for automated software test case generation,” in The Journal of Systems and Software, vol. 86, no. 8, 2013, pp. 1978-2001, https://doi.org/10.1016/j.jss.2013.02.061
- 7. S. Ali, L. C. Briand, H. Hemmati, and R. K. Panesar-Walawege, “A systematic review of the application and empirical investigation of search-based test case generation,” in IEEE Trans. Softw. Eng., vol. 36, no. 6, 2009, pp. 742–762, https://doi.org/10.1109/TSE.2009.52
- 8. P. McMinn, “Search-based software test data generation: a survey,” in Softw. Test. Verif. Reliab., vol. 14, no. 2, 2004, pp. 105–156, https://doi.org/10.1002/stvr.294
- 9. R. Jeevarathinam and A. S. Thanamani, “A survey on mutation testing methods, fault classifications and automatic test cases generation,” in J. Sci. Ind. Res., vol. 70, no. 2, 2011, pp. 113–117.
- 10. T. Chen, X. S. Zhang, S. Z. Guo, H. Y. Li, and Y. Wu, “State of the art: Dynamic symbolic execution for automated test generation,” in Futur. Gener. Comput. Syst., vol. 29, no. 7, 2013, pp. 1758–1773, https://doi.org/10.1016/j.future.2012.02.006
- 11. R. M. Parizi, A. A. A. Ghani, R. Abdullah, and R. Atan, “Empirical evaluation of the fault detection effectiveness and test effort efficiency of the automated AOP testing approaches,” in Inf. Softw. Technol., vol. 53, no. 10, 2011, https://doi.org/10.1016/j.infsof.2011.05.004
- 12. S. Popić, B. Pavković, I. Velikić, and N. Teslić, “Data generators: a short survey of techniques and use cases with focus on testing,” in 2019 IEEE 9th International Conference on Consumer Electronics (ICCE-Berlin), Berlin, September 2019, https://doi.org/10.1109/ICCE-Berlin47944.2019.8966202
- 13. P. Tramontana, D. Amalfitano, N. Amatucci, and A. R. Fasolino, “Automated functional testing of mobile applications: a systematic mapping study,” in Software Quality Journal, vol. 27, 2019, pp. 149–201, https://doi.org/10.1007/s11219-018-9418-6
- 14. A. Groce, K. Havelund, G. Holzmann, R. Joshi, and R.-G. Xu, “Establishing flight software reliability: testing, model checking, constraint-solving, monitoring and learning,” in Annals of Mathematics and Artificial Intelligence, vol. 70, 2014, pp. 315–349, https://doi.org/10.1007/s10472-014-9408-8
- 15. M. Bures, “Automated testing in the Czech Republic: the current situation and issues,” in Proc. 15th Int. Conf. Comput. Syst. Technol., June 2014, pp. 294–301, https://doi.org/10.1145/2659532.2659605
- 16. S. J. Galler and B. K. Aichernig, “Survey on test data generation tools: An evaluation of white- and gray-box testing tools for C#, C++, Eiffel, and Java,” in Int. J. Softw. Tools Technol. Transf., vol. 16, no. 6, 2014, pp. 727–751, https://doi.org/10.1007/s10009-013-0272-3
- 17. U. R. Molina, F. Kifetew, and A. Panichella, “Java Unit Testing Tool Competition: Sixth round,” in SBST '18: Proceedings of the 11th International Workshop on Search-Based Software Testing, May 2018, pp. 22–29, https://doi.org/10.1145/3194718.3194728
- 18. X. Devroey, S. Panichella, and A. Gambi, “Java Unit Testing Tool Competition: Eighth Round,” in Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, June 2020, pp. 545–548, https://doi.org/10.1145/3387940.3392265
- 19. Y. Zheng, Y. Ma, and J. Xue, “Automated large-scale simulation test-data generation for object-oriented software systems,” in Proceedings of the 1st International Symposium on Data, Privacy, and E-Commerce (ISDPE 2007), Chengdu, November 2007, pp. 74-79, https://doi.org/10.1109/ISDPE.2007.104
- 20. A. Alsharif, G. M. Kapfhammer, and P. McMinn, “DOMINO: Fast and Effective Test Data Generation for Relational Database Schemas,” in 2018 IEEE 11th International Conference on Software Testing, Verification and Validation (ICST), Västeras, April 2018, pp. 12– 22, https://doi.org/10.1109/ICST.2018.00012
- 21. T. Sotiropoulos, S. Chaliasos, V. Atlidakis, D. Mitropoulos, and D. Spinellis, “Data-Oriented Differential Testing of Object-Relational Mapping Systems,” in 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), Madrid, May 2021, pp. 1535–1547, https://doi.org/10.1109/ICSE43902.2021.00137
- 22. S. Poulding and R. Feldt, “Generating Controllably Invalid and Atypical Inputs for Robustness Testing,” in Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), Tokyo, March 2017, https://doi.org/10.1109/ICSTW.2017.21
- 23. N. T. Sy and Y. Deville, “Automatic test data generation for programs with integer and float variables,” in Proc. 16th Annu. Int. Conf. Autom. Softw. Eng. (ASE 2001), San Diego, November 2001, pp. 13–21, https://doi.org/10.1109/ASE.2001.989786
- 24. N. Gupta, A. P. Mathur, and M. L. Soffa, “Generating test data for branch coverage,” in Proc. ASE 2000 15th IEEE Int. Conf. Autom. Softw. Eng., Grenoble, September 2000, pp. 219–227, https://doi.org/10.1109/ASE.2000.873666
- 25. H. Huang, W.-T. Tsai, R. Paul, and Y. Chen, “Automated model checking and testing for composite Web services,” in Eighth IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC'05), Seattle, May 2005, pp. 300–307, https://doi.org/10.1109/ISORC.2005.16
- 26. D. T. Thu, L. D. Quang, D. A. Nguyen, and P. N. Hung, “A Method of Automated Mock Data Generation for RESTful API Testing,” in Proceedings - 2022 RIVF International Conference on Computing and Communication Technologies (RIVF 2022), Ho Chi Minh City, December 2022, https://doi.org/10.1109/RIVF55975.2022.10013835
- 27. D. T. Thu, D. A. Nguyen, P. N. Hung, “Automated Test Data Generation for Typescript Web Applications,” in Proceedings – International Conference on Knowledge and Systems Engineering, Bangkok, November 2021, https://doi.org/10.1109/KSE53942.2021.9648782
- 28. S. Poulding and J. A. Clark, “Efficient software verification: Statistical testing using automated search,” in IEEE Trans. Softw. Eng., vol. 36, no. 6, 2010, pp. 763–777, https://doi.org/10.1109/TSE.2010.24
- 29. J. Alava, T. M. King, and P. J. Clarke, “Automatic validation of java page flows using model-based coverage criteria,” in Proc. - Int. Comput. Softw. Appl. Conf., Chicago, September 2006, pp. 439–446, https://doi.org/10.1109/COMPSAC.2006.32
- 30. M. Riebisch, I. Philippow, and M. Götze, “UML-Based Statistical Test Case Generation,” in LNCS 2591, 2003, pp. 394–411, https://doi.org/10.1007/3-540-36557-5_28
- 31. L. Bao-Lin, L. Zhi-shu, L. Qing, and C. Y. Hong, “Test Case automate Generation from UML Sequence diagram and OCL expression,” in Proc. - 2007 Int. Conf. Comput. Intell. Secur., Harbin, December 2007, pp. 1048–1052, https://doi.org/10.1109/CIS.2007.150
- 32. Meiliana, I. Septian, R. S. Alianto, Daniel, and F. L. Gaol, “Automated Test Case Generation from UML Activity Diagram and Sequence Diagram using Depth First Search Algorithm,” in Procedia Computer Science, vol. 116, 2017, pp. 629–637, https://doi.org/10.1016/j.procs.2017.10.029
- 33. Y. Zheng, J. Xue, and Y. Zhu, “ISDGen: An automated simulation data generation tool for object-oriented information systems,” in 2008 Asia Simul. Conf. - 7th Int. Conf. Syst. Simul. Sci. Comput., Beijing, October 2008, https://doi.org/10.1109/ASC-ICSC.2008.4675401
- 34. M. Zhang, T. Yue, S. Ali, H. Zhang, and J. Wu, “A Systematic Approach to Automatically Derive Test Cases from Use Cases Specified in Restricted Natural Languages,” in LNCS, vol. 8769, 2014, pp. 142–157, https://doi.org/10.1007/978-3-319-11743-0_10
- 35. C. Wang, F. Pastore, A. Goknil, and L. C. Briand, “Automatic Generation of Acceptance Test Cases from Use Case Specifications: An NLP-Based Approach,” in IEEE Trans. on Softw. Eng., vol. 48, no. 2, 2022, https://doi.org/10.1109/TSE.2020.2998503
- 36. M. Lafi, T. Alrawashed, and A. M. Hammad, “Automated Test Cases Generation from Requirements Specification,” in 2021 International Conference on Information Technology, Amman, July 2021, https://doi.org/10.1109/ICIT52682.2021.9491761
- 37. D. Xu, W. Xu, M. Tu, N. Shen, W. Chu, and C. H. Chang, “Automated Integration Testing Using Logical Contracts,” in IEEE Trans. Reliab., vol. 65, no. 3, 2016, pp. 1205–1222, https://doi.org/10.1109/TR.2015.2494685
- 38. O. N. Timo and G. Langelier, “Test Data Generation for Cyclic Executives with CBMC and Frama-C: A Case Study,” in Electron. Notes Theor. Comput. Sci., vol. 320, 2016, pp. 35–51, https://doi.org/10.1016/j.entcs.2016.01.004
- 39. K. Schneid, L. Stapper, S. Thone, and H. Kuchen, “Automated Regression Tests: A No-Code Approach for BPMN-based Process-Driven Applications,” in 2021 IEEE 25th International Enterprise Distributed Object Computing Conference (EDOC), Gold Coast, October 2021, https://doi.org/10.1109/EDOC52215.2021.00014
- 40. C. Fetzer and Z. Xiao, “An automated approach to increasing the robustness of C libraries,” in Proc. 2002 Int. Conf. Dependable Syst. Networks, Washington D.C., June 2002, pp. 155–164, https://doi.org/10.1109/DSN.2002.1028896
- 41. S. Scalabrino, M. Guerra, G. Grano, A. De Lucia, R. Oliveto, D. D. Nucci, and H. C. Gall, “Ocelot: A search-based test-data generation tool for C,” in ASE 2018 - Proceedings of the 33rd ACM/IEEE Int. Conf. on Autom. Softw. Eng., Montpellier, September 2018, pp. 868-871, https://doi.org/10.1145/3238147.3240477
- 42. H. Riener and G. Fey, “FAuST: A framework for formal verification, automated debugging, and software test generation,” in LNCS, vol. 7385, 2012, https://doi.org/10.1007/978-3-642-31759-0_17
- 43. H. Tanno, X. Zhang, T. Hoshino, and K. Sen, “TesMa and CATG: Automated Test Generation Tools for Models of Enterprise Applications,” in 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, Florence, May 2015, pp. 717–720, https://doi.org/10.1109/ICSE.2015.231
- 44. H. Ohbayashi, H. Kanuka, and C. Okamoto, “A Preprocessing Method of Test Input Generation by Symbolic Execution for Enterprise Application,” in 2018 25th Asia-Pacific Software Engineering Conference (APSEC), Nara, December 2018, https://doi.org/10.1109/APSEC.2018.00104
- 45. T. Su et al., “Automated Coverage-Driven Test Data Generation Using Dynamic Symbolic Execution,” in 2014 Eighth Int. Conf. Softw. Secur. Reliab., San Francisco, June 2014, pp. 98–107, https://doi.org/10.1109/SERE.2014.23
- 46. L. Hao, J. Shi, T. Su, and Y. Huang, “Automated Test Generation for IEC 61131-3 ST Programs via Dynamic Symbolic Execution,” in 2019 International Symposium on Theoretical Aspects of Software Engineering (TASE), Guilin, July 2019, https://doi.org/10.1109/TASE.2019.00004
- 47. W. He, J. Shi, T. Su, Z. Lu, L. Hao, and Y. Huang, “Automated test generation for IEC 61131-3 ST programs via dynamic symbolic execution,” in Science of Computer Programming, vol. 206, 2021, https://doi.org/10.1016/j.scico.2021.102608
- 48. K. Jamrozik, G. Fraser, N. Tillman, and J. De Halleux, “Generating test suites with augmented dynamic symbolic execution,” in LNCS, vol. 7942, 2013, pp. 152–167, https://doi.org/10.1007/978-3-642-38916-0_9
- 49. B. Chen, Z. Yang, L. Lei, K. Cong, and F. Xie, “Automated Bug Detection and Replay for COTS Linux Kernel Modules with Concolic Execution,” in 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), London (Canada), February 2020, https://doi.org/10.1109/SANER48275.2020.9054797
- 50. T. A. Bui, L. N. Tung, H. V. Tran, and P. N. Hung, “A Method for Automated Test Data Generation for Units using Classes of Qt Framework in C++ Projects,” in 2022 RIVF International Conference on Computing and Communication Technologies (RIVF), Ho Chi Minh City, December 2022, https://doi.org/10.1109/RIVF55975.2022.10013869
- 51. M. H. Do, L. N. Tung, H. V. Tran, and P. N. Hung, “An Automated Test Data Generation Method for Templates of C++ Projects,” in 2022 14th International Conference on Knowledge and Systems Engineering (KSE), Nha Trang, October 2022, https://doi.org/10.1109/KSE56063.2022.9953626
- 52. T. Liu, Z. Wang, Y. Zhang, Z. Liu, B. Fang, and Z. Pang, “Automated Vulnerability Discovery System Based on Hybrid Execution,” in 2022 7th IEEE International Conference on Data Science in Cyberspace (DSC), Guilin, July 2022, pp. 234-241, https://doi.org/10.1109/DSC55868.2022.00038
- 53. K. Li, C. Reichenbach, Y. Smaragdakis, Y. Diao, and C. Csallner, “SEDGE: Symbolic example data generation for dataflow programs,” in 2013 28th IEEE/ACM International Conference on Automated Software Engineering (ASE), Silicon Valley, November 2013, https://doi.org/10.1109/ASE.2013.6693083
- 54. M. Kim, Y. Kim, and Y. Jang, “Industrial application of concolic testing on embedded software: Case studies,” in 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation, Montreal, April 2012, https://doi.org/10.1109/ICST.2012.119
- 55. C. Ma, C. Du, T. Zhang, F. Hu, and X. Cai, “WSDL-Based Automated Test Data Generation for Web Service,” in 2008 Int. Conf. Comput. Sci. Softw. Eng., Wuhan, December 2008, pp. 731–737, https://doi.org/10.1109/CSSE.2008.790
- 56. W. Krenn and B. K. Aichernig, “Test Case Generation by Contract Mutation in Spec#,” in Electron. Notes Theor. Comput. Sci., vol. 253, no. 2, 2009, pp. 71–86, https://doi.org/10.1016/j.entcs.2009.09.052
- 57. M. Bozkurt and M. Harman, “Automatically generating realistic test input from web services,” in Proc. - 6th IEEE Int. Symp. Serv. Syst. Eng., Irvine, December 2011, pp. 13–24, https://doi.org/10.1109/SOSE.2011.6139088
- 58. A. Arcuri, “RESTful API Automated Test Case Generation,” in 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS), Prague, July 2017, pp. 9–20, https://doi.org/10.1109/QRS.2017.11
- 59. N. Havrikov, A. Gambi, A. Zeller, A. Arcuri, and J. P. Galeotti, “Generating unit tests with structured system interactions,” in 2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST), Buenos Aires, May 2017, pp. 30–33, https://doi.org/10.1109/AST.2017.2
- 60. S. Hanna and H. Jaber, “An Approach for Web Applications Test Data Generation Based on Analyzing Client Side User Input Fields,” in 2019 2nd International Conference on new Trends in Computing Sciences (ICTCS), Amman, October 2019, https://doi.org/10.1109/ICTCS.2019.8923098
- 61. H. M. Sneed and K. Erdoes, “Testing big data (Assuring the quality of large databases),” in 2015 IEEE Eighth Int. Conf. Softw. Testing, Verif. Valid. Work., Graz, April 2015, pp. 1–6, https://doi.org/10.1109/ICSTW.2015.7107424
- 62. L. D. Toffola, C. A. Staicu, and M. Pradel, “Saying 'Hi!' is not enough: Mining inputs for effective test generation,” in 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE), Urbana, October 2017, https://doi.org/10.1109/ASE.2017.8115617
- 63. T. Li, X. Lu, and H. Xu, “Automated Test Case Generation from Input Specification in Natural Language,” in 2022 IEEE International Symposium on Software Reliability Engineering Workshops, Charlotte, October 2022, https://doi.org/10.1109/ISSREW55968.2022.00076
- 64. T. Shu, Z. Ding, M. Chen, and J. Xia, “A heuristic transition executability analysis method for generating EFSM-specified protocol test sequences,” in Information Sciences, vol. 370–371, 2016, pp. 63–78, https://doi.org/10.1016/j.ins.2016.07.059
- 65. A. Rauf, S. Anwar, M. A. Jaffer, and A. A. Shahid, “Automated GUI test coverage analysis using GA,” in 7th Int. Conf. Inf. Technol. New Gener., Las Vegas, April 2010, pp. 1057–1062, https://doi.org/10.1109/ITNG.2010.95
- 66. S. Khor and P. Grogono, “Using a genetic algorithm and formal concept analysis to generate branch coverage test data automatically,” in 19th Int. Conf. Autom. Softw. Eng., Linz, September 2004, pp. 346–349, https://doi.org/10.1109/ASE.2004.1342761
- 67. Z. J. Rashid and M. Fatih Adak, “Test Data Generation for Dynamic Unit Test in Java Language using Genetic Algorithm,” in 6th International Conference on Computer Science and Engineering (UBMK), Ankara, September 2021, https://doi.org/10.1109/UBMK52708.2021.9558953
- 68. E. Diaz, J. Tuya, and R. Blanco, “Automated software testing using a metaheuristic technique based on Tabu search,” in 18th IEEE Int. Conf. Autom. Softw. Eng., Montreal, October 2003, pp. 310–313, https://doi.org/10.1109/ASE.2003.1240327
- 69. J. Khandelwal and P. Tomar, “Approach for automated test data generation for path testing in aspect-oriented programs using genetic algorithm,” in Int. Conf. Comput. Com. Autom., Greater Noida, May 2015, pp. 854–858, https://doi.org/10.1109/CCAA.2015.7148494
- 70. B. L. Li, Z. S. Li, J. Y. Zhang, and J. R. Sun, “An Automated Test Case Generation Approach by Genetic Simulated Annealing Algorithm,” in Third Int. Conf. Nat. Comput., Haikou, August 2007, pp. 106–111, https://doi.org/10.1109/ICNC.2007.187
- 71. Z. J. Rashid and M. F. Adak, “Test Data Generation for Dynamic Unit Test in Java Language using Genetic Algorithm,” in 2021 6th International Conference on Computer Science and Engineering (UBMK), Ankara, September 2021, https://doi.org/10.1109/UBMK52708.2021.9558953
- 72. R. Gerlich and C. R. Prause, “Optimizing the Parameters of an Evolutionary Algorithm for Fuzzing and Test Data Generation,” in 2020 IEEE 13th International Conference on Software Testing, Verification and Validation Workshops (ICSTW), Porto, October 2020, https://doi.org/10.1109/ICSTW50294.2020.00061
- 73. H. Sharifipour, M. Shakeri, and H. Haghighi, “Structural test data generation using a memetic ant colony optimization based on evolution strategies,” in Swarm Evol. Comput., vol. 40, 2018, pp. 76-91, https://doi.org/10.1016/j.swevo.2017.12.009
- 74. R. J. Cajica, R. E. G. Torres, and P. M. Álvarez, “Automatic Generation of Test Cases from Formal Specifications using Mutation Testing,” in 18th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, November 2021, https://doi.org/10.1109/CCE53527.2021.9633118
- 75. H. Cui, L. Chen, B. Zhu, and H. Kuang, “An efficient automated test data generation method,” in 2010 International Conference on Measuring Technology and Mechatronics Automation (ICMTMA), Changsha, March 2010, https://doi.org/10.1109/ICMTMA.2010.556
- 76. M. Olsthoorn, A. van Deursen, and A. Panichella, “Generating Highly-structured Input Data by Combining Search-based Testing and Grammar-based Fuzzing,” in ASE '20: Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering, December 2020, pp. 1224-1228, https://doi.org/10.1145/3324884.3418930
- 77. J. Castelein, M. Aniche, M. Soltani, A. Panichella, and A. Van Deursen, “Search-based test data generation for SQL queries,” in Proceedings of the 40th International Conference on Software Engineering, Gothenburg, May 2018, pp. 1220–1230, https://doi.org/10.1145/3180155.3180202
- 78. S. Ali, M. Zohaib Iqbal, A. Arcuri, and L. C. Briand, “Generating test data from OCL constraints with search techniques,” in IEEE Transactions on Software Engineering, vol. 39, no. 10, 2013, pp. 1376–1402, https://doi.org/10.1109/TSE.2013.17
- 79. G. Soltana, M. Sabetzadeh, and L. C. Briand, “Synthetic data generation for statistical testing,” in 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE), Urbana, October 2017, https://doi.org/10.1109/ASE.2017.8115698
- 80. F. Y. B. Daragh and S. Malek, “Deep GUI: Black-box GUI Input Generation with Deep Learning,” in 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE), Melbourne, November 2021, pp. 905–916, https://doi.org/10.1109/ASE51524.2021.9678778
- 81. X. Guo, H. Okamura, and T. Dohi, “Automated Software Test Data Generation With Generative Adversarial Networks,” in IEEE Access, vol. 10, 2022, https://doi.org/10.1109/ACCESS.2022.3153347
- 82. M. Utting, B. Legeard, F. Dadeau, F. Tamagnan, and F. Bouquet, “Identifying and Generating Missing Tests using Machine Learning on Execution Traces,” in 2020 IEEE International Conference On Artificial Intelligence Testing (AITest), Oxford, August 2020, https://doi.org/10.1109/AITEST49225.2020.00020
- 83. K. Cheng, G. Du, T. Wu, L. Chen, and G. Shi, “Automated Vulnerable Codes Mutation through Deep Learning for Variability Detection,” in 2022 International Joint Conference on Neural Networks (IJCNN), Padua, July 2022, https://doi.org/10.1109/IJCNN55064.2022.9892444
- 84. Vineeta, A. Singhal, and A. Bansal, “Generation of test oracles using neural network and decision tree model,” in 2014 5th Int. Conf. - Conflu. Next Gener. Inf. Technol. Summit, Noida, September 2014, pp. 313–318, https://doi.org/10.1109/CONFLUENCE.2014.6949311
- 85. J. Zhang, L. Zhang, M. Harman, D. Hao, Y. Jia, and L. Zhang, “Predictive Mutation Testing,” in IEEE Transactions on Software Engineering, vol. 45, no. 9, 2019, pp. 898–918, https://doi.org/10.1109/TSE.2018.2809496
- 86. T. Potuzak and R. Lipka, “Generation of Benchmark of Software Testing Methods for Java with Realistic Introduced Errors,” in FedCSIS 2023 communication papers, September 2023, to be published.
Notes
1. Thematic Tracks Regular Papers
2. Record prepared with funds from the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the programme "Społeczna odpowiedzialność nauki" (Social Responsibility of Science), module: Popularization of Science and Promotion of Sport (2024).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-ed1b9456-cce4-4a91-acef-9f1021642826