Article title

A Comparison of Citation Sources for Reference and Citation-Based Search in Systematic Literature Reviews

Publication languages
EN
Abstracts
EN
Context: In software engineering, snowball sampling has been used both as a supplementary and as a primary search strategy. Current guidelines recommend using Google Scholar (GS) for snowball sampling. However, GS presents several challenges when used as a source of citations and references. Objective: To compare the effectiveness and usefulness of two leading citation databases (GS and Scopus) for snowball sampling searches. Method: We relied on a published study that used snowball sampling as its search strategy and GS as its citation source. We used its primary studies to compute precision and recall for Scopus. Results: In this particular case, Scopus was highly effective with 95% recall and achieved better precision (5.1%) than GS (2.8%). Moreover, Scopus found nine additional relevant papers. On average, one would need to read approximately 15 more papers in GS than in Scopus to identify one additional relevant paper. Furthermore, Scopus supports batch downloading of both citations and papers’ references, provides better-quality metadata, and offers better source filtering. Conclusion: This study suggests that Scopus is more effective and useful than GS for snowball sampling in systematic secondary studies that aim to identify peer-reviewed literature.
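A minimal arithmetic sketch, not taken from the paper itself, of how the "approximately 15 more papers" figure follows from the reported precision values; it assumes precision is defined as relevant papers retrieved divided by total papers retrieved, so its reciprocal is the expected number of papers read per relevant paper found:

# Sketch only: derives the "extra papers per additional relevant paper" figure
# from the precision values reported in the abstract.
precision_gs = 0.028      # reported precision for Google Scholar
precision_scopus = 0.051  # reported precision for Scopus

nnr_gs = 1 / precision_gs          # ~35.7 papers read per relevant paper (GS)
nnr_scopus = 1 / precision_scopus  # ~19.6 papers read per relevant paper (Scopus)

extra = nnr_gs - nnr_scopus        # ~16, in line with the abstract's ~15
print(round(extra, 1))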
Pages
art. no. 220106
Physical description
Bibliography: 38 items, figures, tables
Authors
  • Blekinge Institute of Technology, Sweden
  • Blekinge Institute of Technology, Sweden
Bibliography
  • 1. B.A. Kitchenham, D. Budgen, and P. Brereton, Evidence-Based Software Engineering and Systematic Reviews. Chapman & Hall/CRC, 2015.
  • 2. J. Krüger, C. Lausberger, I. von Nostitz-Wallwitz, G. Saake, and T. Leich, “Search. review. repeat? an empirical study of threats to replicating SLR searches,” Empir. Softw. Eng., Vol. 25, No. 1, 2020, pp. 627–677.
  • 3. M. Skoglund and P. Runeson, “Reference-based search strategies in systematic reviews,” in 13th International Conference on Evaluation and Assessment in Software Engineering, EASE 2009, Durham University, UK, 20-21 April 2009, Workshops in Computing, D. Budgen, M. Turner, and M. Niazi, Eds. BCS, 2009, pp. 31–40. [Online]. http://ewic.bcs.org/content/ConWebDoc/25022
  • 4. C. Wohlin, “Guidelines for snowballing in systematic literature studies and a replication in software engineering,” in 18th International Conference on Evaluation and Assessment in Software Engineering, EASE ’14, 2014, pp. 38:1–38:10.
  • 5. J. Bailey, C. Zhang, D. Budgen, M. Turner, and S. Charters, “Search Engine Overlaps: Do they agree or disagree?” in 2nd International Workshop on Realising Evidence-Based Software Engineering, REBSE ’07, May 2007, pp. 2–2.
  • 6. L. Chen, M.A. Babar, and H. Zhang, “Towards an evidence-based understanding of electronic data sources,” in 14th International Conference on Evaluation and Assessment in Software Engineering, EASE. BCS, 2010, pp. 135–138.
  • 7. A. Yasin, R. Fatima, L. Wen, W. Afzal, M. Azhar et al., “On Using Grey Literature and Google Scholar in Systematic Literature Reviews in Software Engineering,” IEEE Access, Vol. 8, 2020, pp. 36226–36243.
  • 8. N.B. Ali and M. Usman, “A critical appraisal tool for systematic literature reviews in software engineering,” Inf. Softw. Technol., Vol. 112, 2019, pp. 48–50. [Online]. https://doi.org/10.1016/j.infsof.2019.04.006
  • 9. N.B. Ali and M. Usman, “Reliability of search in systematic reviews: Towards a quality assessment framework for the automated-search strategy,” Information and Software Technology, Vol. 99, Jul. 2018, pp. 133–147. [Online]. https://linkinghub.elsevier.com/retrieve/pii/S0950584917304263
  • 10. M. Usman, N.B. Ali, and C. Wohlin, “A quality assessment instrument for systematic literature reviews in software engineering,” CoRR, Vol. abs/2109.10134, 2021. [Online]. https://arxiv.org/abs/2109.10134
  • 11. H.K.V. Tran, J. Börstler, N.B. Ali, and M. Unterkalmsteiner, “How good are my search strings? reflections on using an existing review as a quasi-gold standard,” e-Informatica Softw. Eng. J., Vol. 16, No. 1, 2022. [Online]. https://doi.org/10.37190/e-inf220103
  • 12. P. Singh and K. Singh, “Exploring Automatic Search in Digital Libraries: A Caution Guide for Systematic Reviewers,” in 21st International Conference on Evaluation and Assessment in Software Engineering, EASE’17. New York, NY, USA: ACM, 2017, pp. 236–241. [Online]. http://doi.acm.org/10.1145/3084226.3084275
  • 13. R. Fatima, A. Yasin, L. Liu, and J. Wang, “Google Scholar vs. Dblp vs. Microsoft Academic Search: An Indexing Comparison for Software Engineering Literature,” in 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC). Madrid, Spain: IEEE, Jul. 2020, pp. 1097–1098. [Online]. https://ieeexplore.ieee.org/document/9202826/
  • 14. T. Dybå, T. Dingsøyr, and G.K. Hanssen, “Applying systematic reviews to diverse study types: An experience report,” in Proceedings of the First International Symposium on Empirical Software Engineering and Measurement, ESEM 2007, September 20-21, 2007, Madrid, Spain. ACM / IEEE Computer Society, 2007, pp. 225–234. [Online]. https://doi.org/10.1109/ESEM.2007.59
  • 15. J.A.M. Santos, A.R. Santos, and M.G. de Mendonça, “Investigating bias in the search phase of software engineering secondary studies,” in 12th Workshop on Experimental Software Engineering, 2015, pp. 488–501.
  • 16. P. Levay, N. Ainsworth, R. Kettle, and A. Morgan, “Identifying evidence for public health guidance: a comparison of citation searching with Web of Science and Google Scholar: Identifying Evidence for Public Health Guidance,” Research Synthesis Methods, Vol. 7, No. 1, Mar. 2016, pp. 34–45.
  • 17. N. Bakkalbasi, K. Bauer, J. Glover, and L. Wang, “Three options for citation tracking: Google Scholar, Scopus and Web of Science,” Biomedical Digital Libraries, Vol. 3, 2006.
  • 18. J. Ortega and I. Aguillo, “Microsoft academic search and google scholar citations: Comparative analysis of author profiles,” Journal of the Association for Information Science and Technology, Vol. 65, No. 6, 2014, pp. 1149–1156.
  • 19. M. Gusenbauer, “Google Scholar to overshadow them all? Comparing the sizes of 12 academic search engines and bibliographic databases,” Scientometrics, Vol. 118, No. 1, 2019, pp. 177–214.
  • 20. A. Martín-Martín, M. Thelwall, E. Orduña-Malea, and E.D. López-Cózar, “Google scholar, microsoft academic, scopus, dimensions, web of science, and opencitations’ COCI: a multidisciplinary comparison of coverage via citations,” Scientometrics, Vol. 126, No. 1, 2021, pp. 871–906. [Online]. https://doi.org/10.1007/s11192-020-03690-4
  • 21. M. Levine-Clark and E. Gil, “A new comparative citation analysis: Google Scholar, Microsoft Academic, Scopus, and Web of Science,” Journal of Business and Finance Librarianship, Vol. 26, No. 1-2, 2021, pp. 145–163.
  • 22. H.F. Moed, J. Bar-Ilan, and G. Halevi, “A new methodology for comparing Google Scholar and Scopus,” Journal of Informetrics, Vol. 10, No. 2, May 2016, pp. 533–551. [Online]. https://www.sciencedirect.com/science/article/pii/S1751157715302285
  • 23. N.B. Ali, E. Engström, M. Taromirad, M.R. Mousavi, N.M. Minhas et al., “On the search for industry-relevant regression testing research,” Empirical Software Engineering, Vol. 24, No. 4, 2019, pp. 2020–2055.
  • 24. Z. Yu and T. Menzies, “FAST²: An intelligent assistant for finding relevant papers,” Expert Syst. Appl., Vol. 120, 2019, pp. 57–71. [Online]. https://doi.org/10.1016/j.eswa.2018.11.021
  • 25. F.D. Davis, “Perceived usefulness, perceived ease of use, and user acceptance of information technology,” MIS Quarterly, 1989, pp. 319–340.
  • 26. A. Martín-Martín and E.D. López-Cózar, “Large coverage fluctuations in google scholar: a case study,” CoRR, Vol. abs/2102.07571, 2021. [Online]. https://arxiv.org/abs/2102.07571
  • 27. J.C.F. de Winter, A.A. Zadpoor, and D. Dodou, “The expansion of Google Scholar versus Web of Science: a longitudinal study,” Scientometrics, Vol. 98, No. 2, Feb. 2014, pp. 1547–1565.
  • 28. E.D. López-Cózar, E. Orduña-Malea, and A. Martín-Martín, “Google scholar as a data source for research assessment,” in Springer Handbook of Science and Technology Indicators, Springer Handbooks, W. Glänzel, H.F. Moed, U. Schmoch, and M. Thelwall, Eds. Springer, 2019, pp. 95–127. [Online]. https://doi.org/10.1007/978-3-030-02511-3_4
  • 29. G. Halevi, H. Moed, and J. Bar-Ilan, “Suitability of google scholar as a source of scientific information and as a source of data for scientific evaluation—review of the literature,” Journal of Informetrics, Vol. 11, No. 3, 2017, pp. 823–834.
  • 30. L. Adriaanse and C. Rensleigh, “Web of science, scopus and google scholar: a content comprehensiveness comparison,” Electronic Library, Vol. 31, No. 6, 2013, pp. 727–744.
  • 31. J.P. Ioannidis, K.W. Boyack, and J. Baas, “Updated science-wide author databases of standardized citation indicators,” PLoS Biology, Vol. 18, No. 10, 2020, p. e3000918.
  • 32. K. Petersen and N.B. Ali, “An analysis of top author citations in software engineering and a comparison with other fields,” Scientometrics, Vol. 126, No. 11, 2021, pp. 9147–9183. [Online]. https://doi.org/10.1007/s11192-021-04144-1
  • 33. I. Aguillo, “Is Google Scholar useful for bibliometrics? A webometric analysis,” Scientometrics, Vol. 91, No. 2, 2012, pp. 343–351.
  • 34. V. Garousi, M. Felderer, and M.V. Mäntylä, “Guidelines for including grey literature and conducting multivocal literature reviews in software engineering,” Inf. Softw. Technol., Vol. 106, 2019, pp. 101–121. [Online]. https://doi.org/10.1016/j.infsof.2018.09.006
  • 35. N.B. Ali, H. Edison, and R. Torkar, “The impact of a proposal for innovation measurement in the software industry,” in ESEM ’20: ACM / IEEE International Symposium on Empirical Software Engineering and Measurement, Bari, Italy, October 5-7, 2020, M.T. Baldassarre, F. Lanubile, M. Kalinowski, and F. Sarro, Eds. ACM, 2020, pp. 28:1–28:6. [Online]. https://doi.org/10.1145/3382494.3422163
  • 36. N.B. Ali and K. Petersen, “Evaluating strategies for study selection in systematic literature studies,” in 2014 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM ’14, Torino, Italy, September 18-19, 2014, M. Morisio, T. Dybå, and M. Torchiano, Eds. ACM, 2014, pp. 45:1–45:4. [Online]. https://doi.org/10.1145/2652524.2652557
  • 37. K. Petersen and N.B. Ali, “Identifying strategies for study selection in systematic reviews and maps,” in Proceedings of the 5th International Symposium on Empirical Software Engineering and Measurement, ESEM 2011, Banff, AB, Canada, September 22-23, 2011. IEEE Computer Society, 2011, pp. 351–354. [Online]. https://doi.org/10.1109/ESEM.2011.46
  • 38. C. Wohlin, P. Runeson, P.A. da Mota Silveira Neto, E. Engström, I. do Carmo Machado et al., “On the reliability of mapping studies in software engineering,” J. Syst. Softw., Vol. 86, No. 10, 2013, pp. 2594–2610. [Online]. https://doi.org/10.1016/j.jss.2013.04.076
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-feb561b5-2a77-47c5-a57f-cb9c16a36d76