Results found: 2

Search results
Searched in keywords: interpretable machine learning
EN
The use of machine learning (ML) models for streamflow forecasting has recently proved highly successful. However, ML is typically criticized for a lack of interpretability. Here, we develop an interpretable ML model for 1-month-ahead streamflow forecasting using extreme gradient boosting (XGBoost) and Shapley additive explanations (SHAP). In addition to a performance evaluation of XGBoost compared to regression tree and random forest approaches, the effects of input variables, including local weather, streamflow lag, and global climate, on streamflow were interpreted in terms of SHAP total effect values, main effect values, interaction values, and loss values. The experimental results at two catchments in the contiguous USA are significant in four ways. First, XGBoost was superior to the other two models in terms of Nash–Sutcliffe efficiency, mean absolute error, root mean square error, and correlation coefficient. Second, by aggregating SHAP values, we found that the contributions of these variables to streamflow differed according to the investigated local perspectives, including streamflow at different months, low streamflow, medium streamflow, high streamflow, and peak streamflow. Third, the SHAP main effect and interaction values revealed that nonmonotonic relationships may occur between the input variables and streamflow, and the strength of variable interaction effects might be related to the variable values rather than their correlations. Fourth, variable drifts in the testing set were deduced from SHAP loss values. These findings are valuable for understanding ML for monthly streamflow forecasting.
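
The abstract above describes an XGBoost regressor explained post hoc with SHAP total-effect and interaction values. The sketch below is a minimal illustration of that workflow, not the paper's actual code; the file name, column names (precip, temp, flow_lag1, enso_index), and hyperparameters are hypothetical placeholders.

```python
# Hedged sketch: XGBoost forecast of next month's streamflow, interpreted with SHAP.
import pandas as pd
import xgboost as xgb
import shap

df = pd.read_csv("catchment_monthly.csv")                  # hypothetical input file
X = df[["precip", "temp", "flow_lag1", "enso_index"]]      # local weather, streamflow lag, global climate
y = df["streamflow_next_month"]                             # 1-month-ahead target

# Chronological split so the test period follows the training period
split = int(len(df) * 0.8)
X_train, X_test = X.iloc[:split], X.iloc[split:]
y_train, y_test = y.iloc[:split], y.iloc[split:]

model = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=4)
model.fit(X_train, y_train)

# SHAP values: per-sample contribution of each input to the forecast
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)                        # total effects
interaction_values = explainer.shap_interaction_values(X_test)     # pairwise interaction effects
shap.summary_plot(shap_values, X_test)
```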
EN
The techniques of explainability and interpretability are not alternatives for many real-world problems, as recent studies often suggest. Interpretable machine learning is not a subset of explainable artificial intelligence or vice versa. While the former aims to build glass-box predictive models, the latter seeks to understand a black box using an explanatory model, a surrogate model, an attribution approach, relevance or importance scores, or other statistics. There is concern that definitions, approaches, and methods do not match, leading to the inconsistent classification of deep learning systems and models for interpretation and explanation. In this paper, we attempt to systematically evaluate and classify the various basic methods of interpretability and explainability used in the field of deep learning. One goal of this paper is to provide specific definitions for interpretability and explainability in deep learning. Another goal is to spell out the various research methods for interpretability and explainability through the lens of the literature to create a systematic classifier for interpretability and explainability in deep learning. We present a classifier that summarizes the basic techniques and methods of explainability and interpretability models. The evaluation of the classifier provides insights into the challenges of developing a complete and unified deep learning framework for interpretability and explainability concepts, approaches, and techniques.
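
The distinction drawn above between a glass-box model and a post hoc explanation of a black box can be illustrated with a global surrogate model. The sketch below is illustrative only and is not from the paper: a neural network plays the black box, and an interpretable decision tree is fitted to the network's predictions rather than to the original labels, so the tree explains the black box instead of the data. The synthetic dataset and hyperparameters are assumptions.

```python
# Hedged sketch: explaining a black-box model with an interpretable global surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)

# Black box: accurate but opaque
black_box = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

# Surrogate: a glass-box tree fitted to the black box's predictions,
# so its rules describe the black box, not the underlying data
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["x0", "x1", "x2"]))
```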