Article title
Selected full texts from this journal
Identifiers
Title variants
Publication languages
Abstracts
Objective: This study has two main aims: (1) to generate multiple-choice questions (MCQs) using template-based automatic item generation (AIG) in Polish and to evaluate the appropriateness of these MCQs for assessing clinical reasoning skills in medical education; (2) to present a method for using artificial intelligence (AI) to generate new item models from existing models for template-based AIG in medical education. Methods: This was a methodological study. For the first aim, we followed Gierl’s three-step template-based AIG method to generate MCQ items in Polish. The quality of the generated MCQs was evaluated by two experts using a structured form. For the second aim, we proposed a four-step process for transforming a parent template written in English into new templates. We implemented this method in ChatGPT and Claude using two medical MCQ item models. Results: Both experts found the automatically generated Polish questions clear, clinically sound, and suitable for assessing clinical reasoning. Regarding template transformation, our findings showed that ChatGPT and Claude are able to transform item models into new models. Conclusions: We demonstrated the successful implementation of template-based AIG in Polish for generating case-based MCQs to assess clinical reasoning skills in medical education. We also presented an AI-based method for transforming item models to enhance diversity in template-based AIG. Future research should integrate AI-generated models into AIG, evaluate their exam performance, and explore their use in various fields.
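The core idea behind the template-based AIG the abstract describes can be illustrated with a minimal sketch: an item model (a stem with variable slots) is expanded over permitted value combinations to yield many item variants. The clinical values, slot names, and `generate_items` helper below are illustrative placeholders, not content from the study; real AIG additionally constrains combinations via a cognitive model so that only clinically coherent items are produced.

```python
import itertools

# Step 2 of the three-step method: an item model — a stem template
# with variable slots (all values here are hypothetical examples).
ITEM_MODEL = (
    "A {age}-year-old patient presents with {symptom}. "
    "Examination shows {finding}. What is the most likely diagnosis?"
)

# Permitted values per slot; in practice a cognitive model (step 1)
# restricts which combinations are clinically coherent.
VARIABLES = {
    "age": ["25", "68"],
    "symptom": ["chest pain radiating to the left arm", "sudden dyspnea"],
    "finding": ["ST-segment elevation on ECG", "unilateral leg swelling"],
}

def generate_items(model: str, variables: dict) -> list[str]:
    """Step 3: expand the item model over every combination of slot values."""
    keys = list(variables)
    return [
        model.format(**dict(zip(keys, combo)))
        for combo in itertools.product(*(variables[k] for k in keys))
    ]

items = generate_items(ITEM_MODEL, VARIABLES)
print(len(items))  # 2 * 2 * 2 = 8 candidate stems
```

A parent template like `ITEM_MODEL` is what the abstract's four-step AI process would transform into new item models (e.g., a different presenting complaint with its own slot values), multiplying the pool of generatable items.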
Journal
Year
Volume
Pages
81–89
Physical description
Bibliography: 20 items; tables.
Authors
author
- Department of Medical Education and Informatics, Faculty of Medicine, Gazi Üniversitesi Hastanesi; E Blok 9. Kat 06500 Beşevler, Ankara, Turkey
- Department of Bioinformatics and Telemedicine, Jagiellonian University Medical College, Kraków, Poland
author
- Department of Bioinformatics and Telemedicine, Jagiellonian University Medical College, Kraków, Poland
author
- Department of Medical Education, Jagiellonian University Medical College, Kraków, Poland
Bibliography
- 1. Daniel M, Rencic J, Durning SJ, Holmboe E, Santen SA, Lang V, et al. Clinical Reasoning Assessment Methods: A Scoping Review and Practical Guidance. Acad Med. 2019;94(6):902-12.
- 2. Pugh D, De Champlain A, Touchie C. Plus ça change, plus c’est pareil: Making a continued case for the use of MCQs in medical education. Med Teach. 2019;41(5):569-77.
- 3. Gierl MJ, Lai H, Tanygin V. Advanced Methods in Automatic Item Generation. 1st edition. New York: Routledge; 2021.
- 4. Cheung BHH, Lau GKK, Wong GTC, Lee EYP, Kulkarni D, Seow CS, et al. ChatGPT versus human in generating medical graduate exam multiple choice questions - A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom). PLoS ONE. 2023;18(8):e0290691.
- 5. Coşkun Ö, Kıyak YS, Budakoğlu Iİ. ChatGPT to generate clinical vignettes for teaching and multiple-choice questions for assessment: A randomized controlled experiment. Med Teach. 2024;13:1-7.
- 6. Kıyak YS, Coşkun Ö, Budakoğlu Iİ, Uluoğlu C. ChatGPT for generating multiple-choice questions: Evidence on the use of artificial intelligence in automatic item generation for a rational pharmacotherapy exam. Eur J Clin Pharmacol. 2024;80(5):729-35.
- 7. Kıyak YS, Kononowicz AA. Case-based MCQ generator: A custom ChatGPT based on published prompts in the literature for automatic item generation. Med Teach. 2024;48(6):1018-20.
- 8. Laupichler MC, Rother JF, Grunwald Kadow IC, Ahmadi S, Raupach T. Large Language Models in Medical Education: Comparing ChatGPT- to Human-Generated Exam Questions. Acad Med. 2023;99(5):508-12. https://doi.org/10.1097/ACM.0000000000005626.
- 9. Zuckerman M, Flood R, Tan RJB, Kelp N, Ecker DJ, Menke J, et al. ChatGPT for assessment writing. Med Teach. 2023;45(11):1224-7.
- 10. Fors UGH, Muntean V, Botezatu M, Zary N. Cross-cultural use and development of virtual patients. Med Teach. 2009;31(8):732-8.
- 11. Mayer A, Da Silva Domingues V, Hege I, Kononowicz AA, Larrosa M, Martínez-Jarreta B, et al. Planning a Collection of Virtual Patients to Train Clinical Reasoning: A Blueprint Representative of the European Population. IJERPH. 2022;19(10):6175.
- 12. Kıyak YS, Budakoğlu Iİ, Coşkun Ö, Koyun E. The First Automatic Item Generation in Turkish for Assessment of Clinical Reasoning in Medical Education. Tıp Eğitimi Dünyası. 2023;22(66):72-90.
- 13. Kıyak YS, Coşkun Ö, Budakoğlu Iİ, Uluoğlu C. Psychometric Analysis of the First Turkish Multiple-Choice Questions Generated Using Automatic Item Generation Method in Medical Education. Tıp Eğitimi Dünyası. 2023;22(68):154-61.
- 14. Sayin A, Gierl M. Using OpenAI GPT to Generate Reading Comprehension Items. Educ Meas. 2024;43(1):5-18.
- 15. Gierl MJ, Lai H, Turner SR. Using automatic item generation to create multiple-choice test items. Med Educ. 2012;46(8):757-65.
- 16. Leo J, Kurdi G, Matentzoglu N, Parsia B, Sattler U, Forge S, et al. Ontology-Based Generation of Medical, Multi-term MCQs. Int J Artif Intell Educ. 2019;29:145-88.
- 17. Pugh D, De Champlain A, Gierl M, Lai H, Touchie C. Can automated item generation be used to develop high quality MCQs that assess application of knowledge? RPTEL. 2020;15:12.
- 18. Masters K, Benjamin J, Agrawal A, MacNeill H, Pillow MT, Mehta N. Twelve tips on creating and using custom GPTs to enhance health professions education. Med Teach. 2024;46(6):752-56.
- 19. Fisher AD, Fisher G. Evaluating performance of custom GPT in anesthesia practice. J Clin Anesth. 2024;93:111371.
- 20. Agarwal M, Goswami A, Sharma P. Evaluating ChatGPT-3.5 and Claude-2 in Answering and Explaining Conceptual Medical Physiology Multiple-Choice Questions. Cureus. 2023;15(9):e46222. https://doi.org/10.7759/cureus.46222.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-d5ee3473-b9fb-4a2e-a0bf-84982e083d34