Article title

Evaluating the forecasting capabilities of probabilistic and point-based LSTM models in sequence prediction

Content
Identifiers
Title variants
PL
Ocena jakości prognostycznych modeli probabilistycznych i punktowych LSTM
Publication languages
EN
Abstracts
EN
This paper compares the performance of probabilistic and non-probabilistic LSTM models in the task of univariate, real-valued sequence forecasting. Model performance is evaluated in terms of mean absolute error and root mean squared error for different forecasting horizons. The results show that probabilistic models can outperform non-probabilistic models in the forecasting task.
PL (translated)
The paper compares the performance of probabilistic and non-probabilistic LSTM models in the task of time-series forecasting. Model performance is evaluated in terms of mean absolute error and root mean squared error for different forecast horizons. The results show that probabilistic models can outperform non-probabilistic models in the forecasting task.
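The two evaluation metrics named in the abstract, mean absolute error (MAE) and root mean squared error (RMSE), can be sketched as follows. The forecast and actual values below are hypothetical, purely for illustration; they do not come from the paper's experiments:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the forecast errors."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large errors more than MAE."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Hypothetical point forecasts vs. actual values for one forecast horizon
actual = [1.0, 2.0, 3.0, 4.0]
forecast = [1.1, 1.9, 3.2, 3.8]

print(mae(actual, forecast))   # ≈ 0.15
print(rmse(actual, forecast))  # ≈ 0.158
```

For a probabilistic model, a point forecast (e.g. the predictive mean) would be extracted from the predictive distribution before computing these same metrics, which is what makes the two model families directly comparable.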
Year
Pages
267–272
Physical description
Bibliography: 28 items, figures, tables.
Authors
  • Institute of Theory of Electrical Engineering, Measurement and Information Systems, Faculty of Electrical Engineering, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warszawa, Poland
  • Institute of Theory of Electrical Engineering, Measurement and Information Systems, Faculty of Electrical Engineering, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warszawa, Poland
Bibliography
  • [1] J. L. Elman, “Finding structure in time,” Cognitive Science, vol. 14, no. 2, pp. 179–211, 1990.
  • [2] Y. Bengio, P. Simard, and P. Frasconi, “Learning long-term dependencies with gradient descent is difficult,” IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157–166, 1994.
  • [3] S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 11 1997. [Online]. Available: https://doi.org/10.1162/neco.1997.9.8.1735
  • [4] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” 2014.
  • [5] H. Shi, M. Xu, and R. Li, “Deep learning for household load forecasting – a novel pooling deep RNN,” IEEE Transactions on Smart Grid, vol. 9, no. 5, pp. 5271–5280, 2018.
  • [6] R. Fu, Z. Zhang, and L. Li, “Using LSTM and GRU neural network methods for traffic flow prediction,” in 2016 31st Youth Academic Annual Conference of Chinese Association of Automation (YAC). IEEE, 2016, pp. 324–328.
  • [7] S. McNally, J. Roche, and S. Caton, “Predicting the price of bitcoin using machine learning,” in 2018 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP), 2018, pp. 339–343.
  • [8] S. Siami-Namini, N. Tavakoli, and A. Siami Namin, “A comparison of ARIMA and LSTM in forecasting time series,” in 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), 2018, pp. 1394–1401.
  • [9] S. Siami-Namini, N. Tavakoli, and A. S. Namin, “The performance of LSTM and BiLSTM in forecasting time series,” in 2019 IEEE International Conference on Big Data (Big Data), 2019, pp. 3285–3292.
  • [10] Y. Yu, J. Cao, and J. Zhu, “An LSTM short-term solar irradiance forecasting under complicated weather conditions,” IEEE Access, vol. 7, pp. 145651–145666, 2019.
  • [11] N. Ng, R. A. Gabriel, J. McAuley, C. Elkan, and Z. C. Lipton, “Predicting surgery duration with neural heteroscedastic regression,” 2017.
  • [12] V. Flunkert, D. Salinas, and J. Gasthaus, “DeepAR: Probabilistic forecasting with autoregressive recurrent networks,” CoRR, vol. abs/1704.04110, 2017. [Online]. Available: http://arxiv.org/abs/1704.04110
  • [13] Y. Zhang, J. Wang, and X. Wang, “Review on probabilistic forecasting of wind power generation,” Renewable and Sustainable Energy Reviews, vol. 32, pp. 255–270, 2014.
  • [14] G. Bontempi, “Long term time series prediction with multi-input multi-output local learning,” Proceedings of the 2nd European Symposium on Time Series Prediction (TSP), ESTSP08, 01 2008.
  • [15] A. Graves, “Generating sequences with recurrent neural networks,” CoRR, vol. abs/1308.0850, 2013. [Online]. Available: http://arxiv.org/abs/1308.0850
  • [16] H. Hewamalage, C. Bergmeir, and K. Bandara, “Recurrent neural networks for time series forecasting: Current status and future directions,” International Journal of Forecasting, vol. 37, no. 1, pp. 388–427, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0169207020300996
  • [17] B. Lim and S. Zohren, “Time-series forecasting with deep learning: a survey,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 379, no. 2194, p. 20200209, 2021. [Online]. Available: https://royalsocietypublishing.org/doi/abs/10.1098/rsta.2020.0209
  • [18] S. B. Taieb, G. Bontempi, A. Atiya, and A. Sorjamaa, “A review and comparison of strategies for multi-step ahead time series forecasting based on the NN5 forecasting competition,” 2011.
  • [19] M. W. Seeger, D. Salinas, and V. Flunkert, “Bayesian intermittent demand forecasting for large inventories,” Advances in Neural Information Processing Systems, vol. 29, 2016.
  • [20] S. Smyl and K. Kuber, “Data preprocessing and augmentation for multiple short time series forecasting with recurrent neural networks,” 07 2016.
  • [21] E. Zivot and J. Wang, Modeling Financial Time Series with S-PLUS®, ser. International Federation for Information Processing. Springer New York, 2007. [Online]. Available: https://books.google.pl/books?id=sxODP2l1mX8C
  • [22] P. Goldberg, C. Williams, and C. Bishop, “Regression with input-dependent noise: A Gaussian process treatment,” Advances in Neural Information Processing Systems, vol. 10, 02 1998.
  • [23] V. Akgiray, “Conditional heteroscedasticity in time series of stock returns: Evidence and forecasts,” The Journal of Business, vol. 62, no. 1, pp. 55–80, 1989. [Online]. Available: http://www.jstor.org/stable/2353123
  • [24] B. Whitcher, S. Byers, P. Guttorp, and D. Percival, “Testing for homogeneity of variance in time series: Long memory, wavelets and the nile river,” Water Resour. Res., vol. 38, 06 1999.
  • [25] C. M. Bishop and N. M. Nasrabadi, Pattern recognition and machine learning. Springer, 2006, vol. 4, no. 4.
  • [26] A. Trindade, “ElectricityLoadDiagrams20112014,” UCI Machine Learning Repository, 2015, DOI: https://doi.org/10.24432/C58C86.
  • [27] G. Lai, W.-C. Chang, Y. Yang, and H. Liu, “Modeling long-and short-term temporal patterns with deep neural networks,” 2018.
  • [28] R. Pontius, O. Thontteh, and H. Chen, “Components of information for multiple resolution comparison between maps that share a real variable,” Environmental and Ecological Statistics, vol. 15, pp. 111–142, 06 2008.
Notes
Record compiled with funds from the Ministry of Science and Higher Education (MNiSW), agreement no. POPUL/SP/0154/2024/02, under the programme "Społeczna odpowiedzialność nauki II" (Social Responsibility of Science II) – module: Popularisation of science and promotion of sport (2025).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-3612b7db-fc6b-4508-9f4b-d297db315ea8