Article title

Self-learning scoring models – introduction of an on-line approach to risk assessment

Authors
Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
The problem considered in this article is the construction of an evaluation model that can subsequently be used for modeling and risk management. The research culminates in a new model built on observations of existing risk-management models and on knowledge from information theory, machine learning and artificial neural networks. The developed tools are trained online: they automatically deduce rules from the data while the model is being applied to evaluation tasks. As a consequence, the model changes the data analysis stage, limits the expert knowledge required in the domain where the assessment model is applied and, to some extent, makes the shape of the model independent of the currently available range of data. These features increase its ability to generalize, allow it to cope with data from previously undefined classes, and improve its robustness to gaps in the data. The performance of the model presented in this paper is tested and verified on real-life data resembling a realistic practical application. Preliminary tests performed within the scope of this work indicate that the developed model can serve as a starting point for further research, as some of the mechanisms used show fairly high efficiency and flexibility.
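To make the idea of online training concrete, the sketch below shows a minimal, purely illustrative online scoring model; it is not the authors' actual model, and all names (OnlineScoringModel, score, update) and the choice of a logistic score are assumptions. The weights are updated one labelled observation at a time, so the model keeps learning while it is already being used for evaluation, and missing feature values are simply skipped, giving a crude form of robustness to gaps in the data.

```python
import math

class OnlineScoringModel:
    """Minimal online logistic scoring model (illustrative sketch only)."""

    def __init__(self, n_features, learning_rate=0.05):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.lr = learning_rate

    def score(self, x):
        # Risk score in (0, 1); missing values (None) contribute nothing.
        z = self.bias + sum(w * v for w, v in zip(self.weights, x) if v is not None)
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, label):
        # Single stochastic gradient step on the logistic loss,
        # performed during model application rather than in a separate training phase.
        error = self.score(x) - label
        self.bias -= self.lr * error
        for i, v in enumerate(x):
            if v is not None:
                self.weights[i] -= self.lr * error * v


# Usage: score a case, then learn from the observed outcome.
model = OnlineScoringModel(n_features=3)
case = [0.4, None, 1.2]            # second feature is missing
print(round(model.score(case), 3))
model.update(case, label=1)        # observed bad outcome
```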
Keywords
Contributors
author
  • Faculty of Applied Informatics and Mathematics, Warsaw University of Life Sciences (SGGW), Nowoursynowska 159, 02-776 Warsaw, Poland
author
  • Faculty of Mathematics and Information Science, Warsaw University of Technology, PL Politechniki 1, 00-661 Warsaw, Poland
Bibliography
  • 1. Cover T.M., Thomas J.A. Elements of Information Theory. Wiley, 1991.
  • 2. Pal S.K., Mitra S. Multi-layer perceptron, fuzzy sets and classification. IEEE Transactions on Neural Networks, 3, 1992, 683-697.
  • 3. Shannon C. A mathematical theory of communication. Bell System Technical Journal, 27(3), 1948, 379-423.
  • 4. Mitchell T. Machine Learning. McGraw-Hill, 1997.
  • 5. Quinlan J.R. Induction of decision trees. Machine Learning, 1(1), 1986, 81-106.
  • 6. Shimazaki H., Shinomoto S. A method for selecting the bin size of a time histogram. Neural Computation, 19, 2007, 1503-1527.
  • 7. Sturges H.A. The choice of a class interval. Journal of the American Statistical Association, 1926, 65-66.
  • 8. McCulloch W.S., Pitts W.H. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 1943, 115-133.
  • 9. Hassoun M.H. Fundamentals of Artificial Neural Networks. The MIT Press, 1995.
  • 10. Rosenblatt F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan, 1962.
  • 11. Csaji B.C. Approximation with artificial neural networks. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.2647&rep=rep1&type=pdf, 2001.
  • 12. Hornik K. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4, 1991, 251-257.
  • 13. Riedmiller M. Advanced supervised learning in multi-layer perceptrons – from backpropagation to adaptive learning algorithms. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.27.7876&rep=rep1&type=pdf.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-e23288c1-5ca6-4ab5-ae38-fc0cafa9489c