Article title

Revisiting the optimal probability estimator from small samples for data mining

Authors
Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Estimation of probabilities from empirical data samples has drawn close attention in the scientific community and has been identified as a crucial phase in many machine learning and knowledge discovery research projects and applications. In addition to the trivial and straightforward estimation with relative frequency, more elaborate probability estimation methods for small samples have been proposed and applied in practice (e.g., Laplace’s rule, the m-estimate). Piegat and Landowski (2012) proposed a novel probability estimation method for small samples, Eph√2, that is optimal according to the mean absolute error of the estimation result. In this paper we show that, even though the articulation of Piegat’s formula seems different, it is in fact a special case of the m-estimate with p_a = 1/2 and m = √2. Within an experimental framework, we present an in-depth analysis of several probability estimation methods with respect to their mean absolute errors and demonstrate their potential advantages and disadvantages. We extend the analysis from single-instance samples to samples with a moderate number of instances. For the purpose of estimating probabilities, we define small samples as samples containing either fewer than four successes or fewer than four failures, and justify this definition by analysing probability estimation errors on various sample sizes.
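For illustration, here is a minimal Python sketch of the estimators named in the abstract, assuming their standard formulations: relative frequency s/n, Laplace's rule (s + 1)/(n + 2), and the m-estimate (s + m·p_a)/(n + m). This is not the paper's own code (its experimental framework, reference [6], is in R), and the function names are illustrative; the assertion checks the abstract's claim that Eph√2 coincides with the m-estimate at p_a = 1/2 and m = √2.

```python
# Illustrative sketch of the probability estimators discussed in the abstract.
# Function names are hypothetical, not taken from the paper or its R framework.
import math

def relative_frequency(s, n):
    """Trivial estimate: fraction of successes s in n trials
    (falling back to 0.5 for an empty sample is an assumption here)."""
    return s / n if n > 0 else 0.5

def laplace(s, n):
    """Laplace's rule of succession: (s + 1) / (n + 2)."""
    return (s + 1) / (n + 2)

def m_estimate(s, n, pa, m):
    """The m-estimate: (s + m * pa) / (n + m), with prior probability pa
    and parameter m controlling the weight of the prior."""
    return (s + m * pa) / (n + m)

def eph_sqrt2(s, n):
    """Piegat and Landowski's Eph_sqrt2 estimator, written out directly."""
    return (s + math.sqrt(2) / 2) / (n + math.sqrt(2))

# The abstract's claim: Eph_sqrt2 is the m-estimate with pa = 1/2, m = sqrt(2).
for s, n in [(0, 1), (1, 1), (2, 5), (7, 10)]:
    assert math.isclose(eph_sqrt2(s, n), m_estimate(s, n, 0.5, math.sqrt(2)))
    print(f"s={s}, n={n}: relative={relative_frequency(s, n):.3f}, "
          f"Laplace={laplace(s, n):.3f}, Eph_sqrt2={eph_sqrt2(s, n):.3f}")
```

Running the sketch shows, for instance, that a single observed success (s = 1, n = 1) yields a relative frequency of 1.0 but an Eph√2 estimate of about 0.707, illustrating how the prior pulls estimates from very small samples away from the extremes.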
Year
2019
Pages
783–796
Physical description
Bibliography: 36 items; figures, tables, charts.
Creators
  • Cestnik, Bojan: Department of Knowledge Technologies, Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia; Temida d.o.o., Dunajska cesta 51, 1000 Ljubljana, Slovenia
Bibliography
  • [1] Berger, J.O. (1985). Statistical Decision Theory and Bayesian Analysis, Springer, New York, NY.
  • [2] Bouguila, N. (2013). On the smoothing of multinomial estimates using Liouville mixture models and applications, Pattern Analysis and Applications 16(3): 349–363.
  • [3] Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C.J. (1984). Classification and Regression Trees, Wadsworth, Belmont.
  • [4] Calvo, B. and Santafé, G. (2016). SCMAMP: Statistical comparison of multiple algorithms in multiple problems, The R Journal 8(1): 248–256.
  • [5] Cestnik, B. (1990). Estimating probabilities: A crucial task in machine learning, Proceedings of the 9th European Conference on Artificial Intelligence, London, UK, pp. 147–149.
  • [6] Cestnik, B. (2018). Experimental framework in R for experimenting with probability estimations from small samples, https://github.com/BojanCestnik/probability-estimation.R.
  • [7] Cestnik, B. and Bratko, I. (1991). On estimating probabilities in tree pruning, Proceedings of the European Working Session on Learning, Porto, Portugal, pp. 138–150.
  • [8] Chan, J.C.C. and Kroese, D.P. (2011). Rare-event probability estimation with conditional Monte Carlo, Annals of Operations Research 189(1): 43–61.
  • [9] Chandra, B. and Gupta, M. (2011). Robust approach for estimating probabilities in naïve-Bayes classifier for gene expression data, Expert Systems with Applications 38(3): 1293–1298.
  • [10] DasGupta, A. (2011). Probability for Statistics and Machine Learning: Fundamentals and Advanced Topics, Springer, New York, NY.
  • [11] DeGroot, M. and Schervish, M. (2012). Probability and Statistics, Addison-Wesley, Boston, MA.
  • [12] Demšar, J. (2006). Statistical comparisons of classifiers over multiple data sets, Journal of Machine Learning Research 7(1): 1–30.
  • [13] Domingos, P. and Pazzani, M. (1997). On the optimality of the simple Bayesian classifier under zero-one loss, Machine Learning 29(2): 103–130.
  • [14] Džeroski, S., Cestnik, B. and Petrovski, I. (1993). Using the m-estimate in rule induction, Journal of Computing and Information Technology 1(1): 37–46.
  • [15] Feller, W. (1968). An Introduction to Probability Theory and Its Applications, Wiley, Hoboken, NJ.
  • [16] Fienberg, S.E. and Holland, P.W. (1972). On the choice of flattening constants for estimating multinomial probabilities, Journal of Multivariate Analysis 2(1): 127–134.
  • [17] Flach, P. (2012). Machine Learning: The Art and Science of Algorithms that Make Sense of Data, Cambridge University Press, New York, NY.
  • [18] Fürnkranz, J. and Flach, P.A. (2005). ROC ‘n’ rule learning—towards a better understanding of covering algorithms, Machine Learning 58(1): 39–77.
  • [19] García, S., Fernández, A., Luengo, J. and Herrera, F. (2010). Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power, Information Sciences 180(10): 2044–2064.
  • [20] García, S. and Herrera, F. (2008). An extension on statistical comparisons of classifiers over multiple data sets for all pairwise comparisons, Journal of Machine Learning Research 9(12): 2677–2694.
  • [21] Good, I.J. (1965). The Estimation of Probabilities: An Essay on Modern Bayesian Methods, MIT Press, Cambridge, MA.
  • [22] Good, I.J. (1966). How to estimate probabilities, IMA Journal of Applied Mathematics 2(4): 364–383.
  • [23] Good, P. and Hardin, J. (2012). Common Errors in Statistics (and How to Avoid Them), Wiley, Hoboken, NJ.
  • [24] Grover, J. (2012). Strategic Economic Decision-Making: Using Bayesian Belief Networks to Solve Complex Problems, Springer, New York, NY.
  • [25] Gudder, S. (1988). Quantum Probability, Academic Press, Boston, MA.
  • [26] Laplace, P.-S. (1814). Essai philosophique sur les probabilités, Courcier, Paris.
  • [27] Larose, D. (2010). Discovering Statistics, W.H. Freeman, New York, NY.
  • [28] Mitchell, T.M. (1997). Machine Learning, McGraw-Hill, Maidenhead.
  • [29] Piegat, A. and Landowski, M. (2012). Optimal estimator of hypothesis probability for data mining problems with small samples, International Journal of Applied Mathematics and Computer Science 22(3): 629–645, DOI: 10.2478/v10006-012-0048-z.
  • [30] Piegat, A. and Landowski, M. (2013). Mean square error optimal completeness estimator eph2 of probability, Journal of Theoretical and Applied Computer Science 7(3): 3–20.
  • [31] Piegat, A. and Landowski, M. (2014). Specialized, MSE-optimal m-estimators of the rule probability especially suitable for machine learning, Control and Cybernetics 43(1): 133–160.
  • [32] R Core Team (2018). R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, https://www.R-project.org/.
  • [33] Rudas, T. (2008). Handbook of Probability: Theory and Applications, SAGE Publications, Thousand Oaks, CA.
  • [34] Starbird, M. (2006). What Are the Chances? Probability Made Clear, The Teaching Company, Chantilly, VA.
  • [35] Sulzmann, J.N. and Fürnkranz, J. (2009). An empirical comparison of probability estimation techniques for probabilistic rules, in J. Gama et al. (Eds), Discovery Science, Springer, Heidelberg, pp. 317–331.
  • [36] Webb, J. (2007). Game Theory: Decisions, Interaction and Evolution, Springer, London.
Notes
Record compiled under agreement 509/P-DUN/2018 with funds from the Polish Ministry of Science and Higher Education (MNiSW) allocated to science dissemination activities (2019).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-e4f0cd91-42f4-4a88-ae2a-98e9f5a016cd