Article title

Inverse problems in data analysis

Authors
Identifiers
Title variants
PL
Problemy odwrotne w analizie danych
Publication languages
EN
Abstracts
EN
It is shown that learning from data, modelled as minimization of error functionals, can be reformulated in terms of inverse problems. This reformulation makes it possible to characterize optimal input-output functions of networks with kernel units.
PL
This article shows that learning from data, modelled as minimization of error functionals, can be formulated as an inverse problem. This formulation makes it possible to characterize optimal input/output functions of networks with kernel units.
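The abstract's claim can be illustrated with the standard Tikhonov-regularized kernel least-squares construction: by the representer theorem, the minimizer of the regularized empirical error functional is an input-output function of a kernel network, f(x) = Σᵢ cᵢ K(x, xᵢ), with coefficients obtained by solving a well-posed linear system. The sketch below is not taken from the article; the Gaussian kernel choice, function names, and the regularization parameter `lam` are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X1, X2, width=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_kernel_network(X, y, lam=1e-3, width=1.0):
    """Tikhonov regularization turns the ill-posed interpolation problem
    K c = y into the well-posed system (K + lam*I) c = y."""
    K = gaussian_kernel(X, X, width)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, c, X_new, width=1.0):
    """Evaluate the kernel network f(x) = sum_i c_i K(x, x_i)."""
    return gaussian_kernel(X_new, X_train, width) @ c

# Usage: noisy samples of a smooth target function.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(40)
c = fit_kernel_network(X, y, lam=1e-3)
y_hat = predict(X, c, X)
```

As `lam → 0` the system approaches the (typically ill-conditioned) interpolation problem, which is the inverse-problem viewpoint the abstract refers to; the regularization parameter trades data fit against the norm of the solution in the reproducing kernel Hilbert space.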
Year
Pages
41-48
Physical description
Bibliography: 33 items
Creators
author
Bibliography
  • [1] Aizerman M. A., Braverman E. M., Rozonoer L. I. (1964). Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control 28, 821-837.
  • [2] Aronszajn N. (1950). Theory of reproducing kernels. Transactions of AMS 68, 337-404.
  • [3] Bertero M. (1989). Linear inverse and ill-posed problems. Advances in Electronics and Electron Physics 75, 1-120.
  • [4] Björck Å. (1996). Numerical Methods for Least Squares Problems. Philadelphia: SIAM.
  • [5] Boser B., Guyon I., Vapnik V. N. (1992). A training algorithm for optimal margin classifiers. In Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory (Ed. D. Haussler) (pp. 144-152). ACM Press.
  • [6] Cortes C., Vapnik V. N. (1995). Support-vector networks. Machine Learning 20, 273-297.
  • [7] Cristianini N., Shawe-Taylor J. (2000). An Introduction to Support Vector Machines. Cambridge: Cambridge University Press.
  • [8] Cucker F., Smale S. (2001). On the mathematical foundations of learning. Bulletin of the AMS 39, 1-49.
  • [9] De Vito E., Rosasco L., Caponnetto A., De Giovannini U., Odone F. (2005). Learning from examples as an inverse problem. Journal of Machine Learning Research 6, 883-904.
  • [10] Dunford N., Schwartz J. T. (1963). Linear Operators. Part II: Spectral Theory. New York: Interscience Publishers.
  • [11] Engl H. W., Hanke M., Neubauer A. (2000). Regularization of Inverse Problems. Dordrecht: Kluwer.
  • [12] Friedman A. (1982). Modern Analysis. New York: Dover.
  • [13] Girosi F. (1998). An equivalence between sparse approximation and support vector machines. Neural Computation 10, 1455-1480 (AI Memo No 1606, MIT).
  • [14] Girosi F., Jones M., Poggio T. (1995). Regularization theory and neural network architectures. Neural Computation 7, 219-269.
  • [15] Groetsch C. W. (1977). Generalized Inverses of Linear Operators. New York: Dekker.
  • [16] Hansen P. C. (1998). Rank-Deficient and Discrete Ill-Posed Problems. Philadelphia: SIAM.
  • [17] Kurkova V. (2004). Supervised learning as an inverse problem. Research Report ICS-2004-960, Institute of Computer Science, Prague.
  • [18] Kurkova V. (2004). Learning from data as an inverse problem. In COMPSTAT 2004 - Proceedings on Computational Statistics (J. Antoch, Ed.), 1377-1384. Heidelberg: Physica-Verlag/Springer.
  • [19] Kurkova V. (2005). Neural network learning as an inverse problem. Logic Journal of IGPL 13, 551-559.
  • [20] Kurkova V., Sanguineti M. (2005). Error estimates for approximate optimization by the extended Ritz method. SIAM Journal on Optimization 15, 461-487.
  • [21] Kurkova V., Sanguineti M. (2005). Learning with generalization capability by kernel methods with bounded complexity. Journal of Complexity 13, 551-559.
  • [22] Moore E. H. (1920). Abstract. Bull. Amer. Math. Soc. 26, 394-395.
  • [23] Parzen E. (1966). An approach to time series analysis. Annals of Math. Statistics 32, 951-989.
  • [24] Penrose R. (1955). A generalized inverse for matrices. Proc. Cambridge Philos. Soc. 51, 406-413.
  • [25] Pinkus A. (1985). n-Widths in Approximation Theory. Berlin: Springer-Verlag.
  • [26] Poggio T., Girosi F. (1990). Networks for approximation and learning. Proceedings of the IEEE 78, 1481-1497.
  • [27] Poggio T., Smale S. (2003). The mathematics of learning: dealing with data. Notices of the AMS 50, 536-544.
  • [28] Popper K. (1968). The Logic of Scientific Discovery. New York: Harper Torch Book.
  • [29] Schölkopf B., Smola A. J. (2002). Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. Cambridge: MIT Press.
  • [30] Tikhonov A. N., Arsenin V. Y. (1977). Solutions of Ill-posed Problems. Washington, D.C.: W. H. Winston.
  • [31] Vapnik V. N. (1995). The Nature of Statistical Learning Theory. New York: Springer-Verlag.
  • [32] Wahba G. (1990). Spline Models for Observational Data. Philadelphia: SIAM.
  • [33] Werbos P. J. (1995). Backpropagation: Basics and New Developments. In The Handbook of Brain Theory and Neural Networks (Ed. Arbib M.) (pp. 134-139). Cambridge: MIT Press.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-article-BAR0-0015-0010