Article title
Content
Full texts:
Identifiers
Title variants
Publication languages
Abstracts
In today's technology-driven era, innovative methods for predicting behaviors and patterns are crucial. Virtual Learning Environments (VLEs) are a rich domain for exploration because of their abundant data and their potential for enhancing learning experiences. Long Short-Term Memory (LSTM) models, while proficient with sequential data, face challenges such as overfitting and gradient issues. This study investigates the optimization of LSTM parameters and hyperparameters for VLE prediction. Adaptive gradient-based algorithms, including ADAM, NADAM, ADADELTA, ADAGRAD, and ADAMAX, exhibited superior performance: the LSTM model with ADADELTA achieved 91% accuracy on the BBB course data, while ADAGRAD LSTM models attained average accuracies of 80% and 85% on the DDD and FFF courses, respectively. Genetic algorithms for hyperparameter optimization also contributed significantly: the GA + LSTM + ADAGRAD model achieved 88% and 87% accuracy in the 7th and 9th models on the BBB course data, and the GA + LSTM + ADADELTA model produced average accuracy rates of 80% and 84% on the DDD and FFF course data, with highest accuracy rates of 86% and 93%. These findings highlight the effectiveness of adaptive and genetic algorithms in enhancing LSTM model performance for VLE prediction, offering valuable insights for the advancement of educational technology.
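The abstract describes the approach only at a high level. As a purely illustrative sketch (not the authors' code), the snippet below shows how an LSTM sequence classifier for VLE activity data might be built in tf.keras and compiled with one of the adaptive optimizers named above (Adadelta), with a small manual search over hyperparameter candidates standing in for the genetic algorithm; the data shapes, synthetic data, and candidate values are assumptions made for the example.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_lstm(units, learning_rate, timesteps, features):
    # LSTM binary classifier compiled with the Adadelta adaptive optimizer.
    model = models.Sequential([
        layers.Input(shape=(timesteps, features)),
        layers.LSTM(units),
        layers.Dropout(0.2),  # dropout to curb the overfitting the abstract mentions
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=optimizers.Adadelta(learning_rate=learning_rate),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Synthetic stand-in for VLE clickstream sequences (e.g., weekly activity counts per student).
rng = np.random.default_rng(0)
X = rng.random((200, 30, 8)).astype("float32")       # 200 students, 30 time steps, 8 features
y = rng.integers(0, 2, size=200).astype("float32")   # hypothetical pass/fail labels

# Toy search over (units, learning_rate); a genetic algorithm would evolve such
# candidates across generations instead of enumerating a fixed list.
best = None
for units, lr in [(32, 1.0), (64, 0.5), (128, 1.0)]:
    model = build_lstm(units, lr, timesteps=30, features=8)
    hist = model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
    acc = max(hist.history["val_accuracy"])
    if best is None or acc > best[0]:
        best = (acc, units, lr)

print(f"best val_accuracy={best[0]:.2f} with units={best[1]}, lr={best[2]}")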
Year
Volume
Pages
21
Physical description
Bibliography: 30 items, figures, tables
Authors
author
- Universitas Muhammadiyah Riau, Indonesia
author
- Universiti Malaysia Kelantan (UMK), Malaysia
author
- Universiti Teknikal Malaysia Melaka, Malaysia
Bibliography
- [1] M. M. Mashroofa, A. Haleem, N. Nawaz, and M. A. Saldeen, “E-learning adoption for sustainable higher education,” Heliyon, vol. 9, no. 6, p. e17505, 2023, https://doi.org/10.1016/j.heliyon.2023.e17505.
- [2] Z. Che, C. Peng, and C. Yue, “Optimizing LSTM with multi-strategy improved WOA for robust prediction of high-speed machine tests data,” Chaos Solitons Fractals, vol. 178, p. 114394, 2024, https://doi.org/10.1016/j.chaos.2023.114394.
- [3] D. G. da Silva and A. A. de Moura Meneses, “Comparing Long Short-Term Memory (LSTM) and bidirectional LSTM deep neural networks for power consumption prediction,” Energy Reports, vol. 10, pp. 3315-3334, 2023, https://doi.org/10.1016/j.egyr.2023.09.175.
- [4] O. Dermy, A. Roussanaly, and A. Boyer, “Using Behavioral Primitives to Model Students’ Digital Behavior,” Procedia Comput Sci, vol. 207, pp. 2444-2453, 2022, https://doi.org/10.1016/j.procs.2022.09.302.
- [5] D. Devi and S. Sophia, “GA-CNN: Analyzing student’s cognitive skills with EEG data using a hybrid deep learning approach,” Biomed Signal Process Control, vol. 90, p. 105888, 2024, https://doi.org/10.1016/j.bspc.2023.105888.
- [6] Q. Fu, Z. Gao, J. Zhou, and Y. Zheng, “CLSA: A novel deep learning model for MOOC dropout prediction,” Computers & Electrical Engineering, vol. 94, p. 107315, 2021, https://doi.org/10.1016/j.compeleceng.2021.107315.
- [7] S. Gupta, P. Kumar, and R. Tekchandani, “An optimized deep convolutional neural network for adaptive learning using feature fusion in multimodal data,” Decision Analytics Journal, vol. 8, p. 100277, 2023, https://doi.org/10.1016/j.dajour.2023.100277.
- [8] Y. M. I. Hassan, A. Elkorany, and K. Wassif, “SMFSOP: A semantic-based modelling framework for student outcome prediction,” Journal of King Saud University - Computer and Information Sciences, vol. 35, no. 8, p. 101728, 2023, https://doi.org/10.1016/j.jksuci.2023.101728.
- [9] M. Hlosta, C. Herodotou, T. Papathoma, A. Gillespie, and P. Bergamin, “Predictive learning analytics in online education: A deeper understanding through explaining algorithmic errors,” Computers and Education: Artificial Intelligence, vol. 3, p. 100108, 2022, https://doi.org/10.1016/j.caeai.2022.100108.
- [10] P. Kumar and A. S. Hati, “Deep convolutional neural network based on adaptive gradient optimizer for fault detection in SCIM,” ISA Trans, vol. 111, pp. 350-359, 2021, https://doi.org/10.1016/j.isatra.2020.10.052.
- [11] B. Li et al., “Adaptive Gradient-Based Optimization Method for Parameter Identification in Power Distribution Network,” International Transactions on Electrical Energy Systems, vol. 2022, p. 9300522, 2022, https://doi.org/10.1155/2022/9300522.
- [12] B. Li et al., “A personalized recommendation framework based on MOOC system integrating deep learning and big data,” Computers and Electrical Engineering, vol. 106, p. 108571, 2023, https://doi.org/10.1016/j.compeleceng.2022.108571.
- [13] Q. Li, X. Guan, and J. Liu, “A CNN-LSTM framework for flight delay prediction,” Expert Syst Appl, vol. 227, p. 120287, 2023, https://doi.org/10.1016/j.eswa.2023.120287.
- [14] Y. Lin, S. Feng, F. Lin, J. Xiahou, and W. Zeng, “Multi-scale reinforced profile for personalized recommendation with deep neural networks in MOOCs,” Appl Soft Comput, vol. 148, p. 110905, 2023, https://doi.org/10.1016/j.asoc.2023.110905.
- [15] J. Martinez-Gil, “Optimizing readability using genetic algorithms,” Knowl Based Syst, vol. 284, p. 111273, 2024, https://doi.org/10.1016/j.knosys.2023.111273.
- [16] A. A. Mubarak, H. Cao, and I. M. Hezam, “Deep analytic model for student dropout prediction in massive open online courses,” Computers & Electrical Engineering, vol. 93, p. 107271, 2021, https://doi.org/10.1016/j.compeleceng.2021.107271.
- [17] M. Neghină, A.-I. Dicoiu, R. Chiş, and A. Florea, “A competitive new multi-objective optimization genetic algorithm based on apparent front ranking,” Eng Appl Artif Intell, vol. 132, p. 107870, 2024, https://doi.org/10.1016/j.engappai.2024.107870.
- [18] K. Niu, Y. Zhou, G. Lu, W. Tai, and K. Zhang, “PMCT: Parallel Multiscale Convolutional Temporal model for MOOC dropout prediction,” Computers and Electrical Engineering, vol. 112, p. 108989, 2023, https://doi.org/10.1016/j.compeleceng.2023.108989.
- [19] L. Peng, T. Zhang, S. Wang, G. Huang, and S. Chen, “Diffusion adagrad minimum kernel risk sensitive mean p-power loss algorithm,” Signal Processing, vol. 202, p. 108773, 2023, https://doi.org/10.1016/j.sigpro.2022.108773.
- [20] Y.-L. Peng and W.-P. Lee, “Practical guidelines for resolving the loss divergence caused by the root-mean-squared propagation optimizer,” Appl Soft Comput, vol. 153, p. 111335, 2024, https://doi.org/10.1016/j.asoc.2024.111335.
- [21] J. Ren and S. Wu, “Prediction of user temporal interactions with online course platforms using deep learning algorithms,” Computers and Education: Artificial Intelligence, vol. 4, p. 100133, 2023, https://doi.org/10.1016/j.caeai.2023.100133.
- [22] H. Waheed, S.-U. Hassan, R. Nawaz, N. R. Aljohani, G. Chen, and D. Gasevic, “Early prediction of learners at risk in self-paced education: A neural network approach,” Expert Syst Appl, vol. 213, p. 118868, 2023, https://doi.org/10.1016/j.eswa.2022.118868.
- [23] S. Sageengrana, S. Selvakumar, and S. Srinivasan, “Optimized RB-RNN: Development of hybrid deep learning for analyzing student’s behaviours in online-learning using brain waves and chatbots,” Expert Syst Appl, vol. 248, p. 123267, 2024, https://doi.org/10.1016/j.eswa.2024.123267.
- [24] T. Xu, P. Xu, C. Yang, Z. Li, A. Wang, and W. Guo, “An LSTM-stacked autoencoder multisource response prediction and constraint optimization for scaled expansion tubes,” Appl Soft Comput, vol. 153, p. 111285, 2024, https://doi.org/10.1016/j.asoc.2024.111285.
- [25] K. Wang et al., “A novel GA-LSTM-based prediction method of ship energy usage based on the characteristics analysis of operational data,” Energy, vol. 282, p. 128910, 2023, https://doi.org/10.1016/j.energy.2023.128910.
- [26] M. Uppal et al., “Enhancing accuracy in brain stroke detection: Multi-layer perceptron with Adadelta, RMSProp and AdaMax optimizers,” Front Bioeng Biotechnol, vol. 11, pp. 1-15, 2023, https://doi.org/10.3389/fbioe.2023.1257591.
- [27] A. K. Silivery, R. M. Rao Kovvur, R. Solleti, L. K. S. Kumar, and B. Madhu, “A model for multi-attack classification to improve intrusion detection performance using deep learning approaches,” Measurement: Sensors, vol. 30, p. 100924, 2023, https://doi.org/10.1016/j.measen.2023.100924.
- [28] Z. Zhao, Y. Bao, T. Gao, and Q. An, “Optimization of GFRP-concrete-steel composite column based on genetic algorithm - artificial neural network,” Applied Ocean Research, vol. 143, p. 103881, 2024, https://doi.org/10.1016/j.apor.2024.103881.
- [29] D. Valero-Carreras, J. Alcaraz, and M. Landete, “Comparing two SVM models through different metrics based on the confusion matrix,” Comput Oper Res, vol. 152, p. 106131, 2023, https://doi.org/10.1016/j.cor.2022.106131.
- [30] M. Shirdel, M. Di Mauro, and A. Liotta, “Worthiness Benchmark: A Novel Concept for Analyzing Binary Classification Evaluation Metrics,” Information Sciences, p. 120882, 2024, https://doi.org/10.1016/j.ins.2024.120882.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-2699e968-d4bd-4d19-b8d6-23412c8b00a1