Article title

Uplift Modeling in Direct Marketing

Content
Identifiers
Title variants
Languages of publication
EN
Abstracts
EN
Marketing campaigns directed to randomly selected customers often generate huge costs and a weak response. Moreover, such campaigns tend to unnecessarily annoy customers and make them less likely to respond to future communications. Precise targeting of marketing actions can potentially result in a greater return on investment. Usually, response models are used to select good targets. They aim at achieving high prediction accuracy for the probability of purchase based on a sample of customers to whom a pilot campaign has been sent. However, to separate the impact of the action from other stimuli and from spontaneous purchases, we should model not the response probabilities themselves but the change in those probabilities caused by the action. The problem of predicting this change is known as uplift modeling, differential response analysis, or true lift modeling. In this work, tree-based classifiers designed for uplift modeling are applied to real marketing data and compared with traditional response models and with other uplift modeling techniques described in the literature. The experiments show that the proposed approaches outperform existing uplift modeling algorithms and demonstrate significant advantages of uplift modeling over traditional, response-based targeting.
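To illustrate the idea described in the abstract, a minimal sketch of the simple "two-model" baseline commonly used in the uplift literature is given below: fit one response model on the treated customers and one on the control group, then score each customer by the difference of their predicted purchase probabilities. The synthetic data, feature layout, and scikit-learn classifier choice are illustrative assumptions only; this is not the tree-based method proposed in the paper.

```python
# Minimal sketch of the "two-model" uplift baseline (illustrative assumptions,
# not the paper's tree-based method): train separate response models on the
# treatment and control groups and subtract their predicted probabilities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, d = 10_000, 5
X = rng.normal(size=(n, d))                      # synthetic customer features
treated = rng.integers(0, 2, size=n)             # 1 = received the campaign
# Synthetic responses: the campaign helps only customers with feature 0 > 0.
base = 0.05 + 0.10 * (X[:, 1] > 0)
lift = 0.10 * (X[:, 0] > 0) * treated
y = rng.binomial(1, np.clip(base + lift, 0, 1))  # 1 = purchase

# Fit one response model per group.
model_t = RandomForestClassifier(n_estimators=100, random_state=0)
model_c = RandomForestClassifier(n_estimators=100, random_state=0)
model_t.fit(X[treated == 1], y[treated == 1])
model_c.fit(X[treated == 0], y[treated == 0])

# Predicted uplift = P(purchase | treatment) - P(purchase | control).
uplift = model_t.predict_proba(X)[:, 1] - model_c.predict_proba(X)[:, 1]

# Target customers with the highest predicted uplift,
# not the highest response probability.
top_decile = np.argsort(-uplift)[: n // 10]
print("mean predicted uplift in targeted decile:", uplift[top_decile].mean().round(3))
```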
Year
Volume
Pages
43–50
Physical description
Bibliography: 19 items, figures, tables.
Authors
Bibliography
  • [1] B. Hansotia and B. Rukstales, “Incremental value modeling”, J. Interactive Marketing, vol. 16, no. 3, pp. 35–46, 2002.
  • [2] N. J. Radcliffe and R. Simpson, “Identifying who can be saved and who will be driven away by retention activity”, White paper, Stochastic Solutions Limited, 2007.
  • [3] N. J. Radcliffe and P. D. Surry, “Differential response analysis: modeling true response by isolating the effect of a single action”, in Proc. Credit Scoring Credit Control VI, Edinburgh, Scotland, 1999.
  • [4] N. J. Radcliffe and P. D. Surry, “Real-world uplift modeling with significance-based uplift trees”, Portrait Tech. Rep. TR-2011-1, Stochastic Solutions, 2011.
  • [5] K. Hillstrom, “The MineThatData e-mail analytics and data mining challenge”, MineThatData blog, 2008 [Online]. Available: http://blog.minethatdata.com/2008/03/minethatdata-e-mail-analytics-and-data.html, retrieved on 02.04.2012.
  • [6] P. Rzepakowski and S. Jaroszewicz, “Decision trees for uplift modeling”, in Proc. 10th IEEE Int. Conf. Data Mining ICDM-2010, Sydney, Australia, Dec. 2010, pp. 441–450.
  • [7] P. Rzepakowski and S. Jaroszewicz, “Decision trees for uplift modeling with single and multiple treatments”, Knowledge and Information Systems, pp. 1–25, 2011 [Online]. Available: http://www.springerlink.com/content/f45pw0171234524j
  • [8] C. Manahan, “A proportional hazards approach to campaign list selection”, in Proc. Thirtieth Ann. SAS Users Group Int. Conf. SUGI, Philadelphia, PA, 2005.
  • [9] D. M. Chickering and D. Heckerman, “A decision theoretic approach to targeted advertising”, in Proc. 16th Conf. Uncertainty in Artif. Intell. UAI-2000, Stanford, CA, 2000, pp. 82–88.
  • [10] V. S. Y. Lo, “The true lift model – a novel data mining approach to response modeling in database marketing”, SIGKDD Explor., vol. 4, no. 2, pp. 78–86, 2002.
  • [11] J. R. Quinlan, C4.5: Programs for Machine Learning. Morgan Kaufmann, 1992.
  • [12] I. Csiszár and P. Shields, “Information theory and statistics: A tutorial”, Found. Trends in Commun. Inform. Theory, vol. 1, no. 4, pp. 417–528, 2004.
  • [13] L. Lee, “Measures of distributional similarity”, in Proc. 37th Ann. Meet. Assoc. Comput. Linguistics ACL-1999, Maryland, USA, 1999, pp. 25–32.
  • [14] T. S. Han and K. Kobayashi, Mathematics of Information and Coding. Boston, USA: American Mathematical Society, 2001.
  • [15] S. Jaroszewicz and D. A. Simovici, “A general measure of rule interestingness”, in Proc. 5th Eur. Conf. Princ. Data Mining Knowl. Discov. PKDD-2001, Freiburg, Germany, 2001, pp. 253–265.
  • [16] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and Regression Trees. Monterey, USA: Wadsworth Inc., 1984.
  • [17] T. Mitchell, Machine Learning. McGraw Hill, 1997.
  • [18] J. R. Quinlan, “Simplifying decision trees”, Int. J. Man-Machine Studies, vol. 27, no. 3, pp. 221–234, 1987.
  • [19] I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann, 2005.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-article-BATA-0016-0005