Article title

Candidate Selection and Instance Ordering for Realtime Algorithm Configuration

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Many modern combinatorial solvers have a variety of parameters through which a user can customise their behaviour. Algorithm configuration is the process of selecting good values for these parameters in order to improve performance. Time and again algorithm configuration has been shown to significantly improve the performance of many algorithms for solving challenging computational problems. Automated systems for tuning parameters regularly out-perform human experts, sometimes by orders of magnitude. Online algorithm configurators, such as ReACTR, are able to tune a solver online without incurring costly offline training. As such, ReACTR’s main focus is on runtime minimisation while solving combinatorial problems. To do this ReACTR adopts a one-pass methodology where each instance in a stream of instances to be solved is considered only as it arrives. Consequently, ReACTR’s performance is sensitive to the order in which instances arrive. It is still not understood which instance orderings positively or negatively affect the performance of ReACTR. This paper investigates the effect of instance ordering and grouping by empirically evaluating different instance orderings based on difficulty and feature values. Though the end user is generally unable to control the order in which instances arrive, it is important to understand which orderings impact ReACTR’s performance and to what extent. This study also has practical benefit, as such orderings can occur organically. For example, as a business grows the problems it encounters, such as routing or scheduling, often grow in size and difficulty. ReACTR’s performance also depends strongly on the configuration selection procedure used. This component controls which configurations are selected to run in parallel from the internal configuration pool. This paper evaluates various ranking mechanisms and different ways of combining them to better understand how the candidate selection procedure affects realtime algorithm configuration. We show that certain selection procedures are superior to others and that the order in which instances arrive determines which selection procedure performs best. We find that both instance order and grouping can significantly affect the overall solving time of the online automatic algorithm configurator ReACTR. One of the more surprising discoveries is that having groupings of similar instances can actually negatively impact the overall performance of the configurator. In particular, we show that orderings based on nearly any instance feature values can lead to significant reductions in total runtime over random instance orderings. In addition, certain candidate selection procedures are more suited to certain orderings than others, and selecting the correct one can show a marked improvement in solving times.
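
The one-pass, tournament-based loop described in the abstract can be made concrete with a short sketch. This is a minimal sketch under stated assumptions, not ReACTR’s actual implementation: the rank-plus-random mixing in select_candidates is just one plausible candidate selection procedure of the kind the paper compares, run_tournament is a hypothetical callback assumed to race the chosen configurations in parallel and report the first finisher, and the simple win-count update is only a crude stand-in for the TrueSkill-style ranking referenced in the bibliography ([20], [33]).

```python
import random

def select_candidates(pool, scores, n_cores, epsilon=0.25):
    # Hypothetical candidate selection: fill most of the available cores with
    # the highest-ranked configurations and reserve a few cores for randomly
    # chosen ones, so untried configurations still get evaluated.
    ranked = sorted(pool, key=lambda c: scores[c], reverse=True)
    n_explore = max(1, int(epsilon * n_cores))
    chosen = ranked[:max(0, n_cores - n_explore)]
    rest = [c for c in pool if c not in chosen]
    chosen += random.sample(rest, min(n_explore, len(rest)))
    return chosen

def one_pass_configuration(instances, pool, n_cores, run_tournament):
    # One-pass loop: each instance is seen exactly once, in arrival order,
    # which is why the instance orderings studied in the paper matter.
    # `run_tournament(instance, candidates)` is assumed to run the candidate
    # configurations in parallel and return (winning configuration, runtime
    # of the first finisher).
    scores = {c: 0 for c in pool}
    total_runtime = 0.0
    for instance in instances:
        candidates = select_candidates(pool, scores, n_cores)
        winner, runtime = run_tournament(instance, candidates)
        total_runtime += runtime
        scores[winner] += 1  # crude stand-in for a TrueSkill-style update
    return total_runtime
```

The TrueSkill references in the bibliography ([20], [33]) indicate the kind of ranking mechanism actually used; the sketch only conveys the shape of the loop and where the candidate selection procedure plugs in.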
Publisher
Year
Pages
141--166
Physical description
Bibliography: 38 items, figures, charts
Authors
  • Insight Centre for Data Analytics, Department of Computer Science, University College Cork, Cork, Ireland
  • Insight Centre for Data Analytics, Department of Computer Science, University College Cork, Cork, Ireland
Bibliography
  • [1] Hutter F, Hoos HH, Leyton-Brown K. Sequential model-based optimization for general algorithm configuration. In: Proceedings of LION, Springer, 2011 pp. 507-523. doi:10.1007/978-3-642-25566-3_40.
  • [2] Hutter F, Lindauer M, Balint A, Bayless S, Hoos H, Leyton-Brown K. The configurable SAT solver challenge (CSSC). Artificial Intelligence, 2017.243:1-25. URL https://doi.org/10.1016/j.artint.2016.09.006.
  • [3] Bartz-Beielstein T, Lasarczyk CW, Preuß M. Sequential parameter optimization. In: Evolutionary Computation, 2005. The 2005 IEEE Congress on, volume 1. IEEE, 2005 pp. 773-780. doi:10.1109/CEC.2005.1554761.
  • [4] Hutter F, Hoos HH, Leyton-Brown K, Stützle T. ParamILS: an automatic algorithm configuration framework. Journal of Artificial Intelligence Research, 2009.36(1):267-306. URL http://dl.acm.org/citation.cfm?id=1734953.1734959.
  • [5] Fitzgerald T, Malitsky Y, O’Sullivan B. ReACTR: Realtime algorithm configuration through tournament rankings. In: Proceedings of IJCAI. 2015 pp. 304-310. ISBN: 978-1-57735-738-4.
  • [6] Birattari M, Stützle T, Paquete L, Varrentrapp K, et al. A Racing Algorithm for Configuring Metaheuristics. In: GECCO, volume 2. 2002 pp. 11-18. ISBN: 1-55860-878-8.
  • [7] López-Ibáñez M, Dubois-Lacoste J, Cáceres LP, Birattari M, Stützle T. The irace package: Iterated racing for automatic algorithm configuration. Operations Research Perspectives, 2016.3:43-58. URL https://doi.org/10.1016/j.orp.2016.09.002.
  • [8] Ansótegui C, Sellmann M, Tierney K. A gender-based genetic algorithm for the automatic configuration of algorithms. In: Proceedings of CP 2009, pp. 142-157. Springer, 2009. doi:10.1007/978-3-642-04244-7_14.
  • [9] Kadioglu S, Malitsky Y, Sellmann M, Tierney K. ISAC-Instance-Specific Algorithm Configuration. In: ECAI, volume 215. 2010 pp. 751-756. ISBN: 978-1-60750-605-8.
  • [10] Arbelaez A, Hamadi Y, Sebag M. Continuous search in constraint programming. In: Autonomous Search, pp. 219-243. Springer, 2011. doi:10.1007/978-3-642-21434-9_9.
  • [11] Fitzgerald T, Malitsky Y, O’Sullivan B, Tierney K. React: Real-time algorithm configuration through tournaments. In: Proceedings of SOCS. 2014. URL http://www.aaai.org/ocs/index.php/SOCS/SOCS14/paper/view/8910.
  • [12] Degroote H, Bischl B, Kotthoff L, De Causmaecker P. Reinforcement Learning for Automatic Online Algorithm Selection-an Empirical Study. In: ITAT 2016 Proceedings, volume 1649. 2016 pp. 93-101. URL http://ceur-ws.org/Vol-1649.
  • [13] Degroote H. Online Algorithm Selection. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017. 2017 pp. 5173-5174. URL https://doi.org/10.24963/ijcai.2017/746.
  • [14] Bengio Y, Louradour J, Collobert R, Weston J. Curriculum learning. In: Proceedings of the 26th annual international conference on machine learning. ACM, 2009 pp. 41-48. doi:10.1145/1553374.1553380.
  • [15] Styles J, Hoos H. Ordered racing protocols for automatically configuring algorithms for scaling performance. In: Proceedings of the 15th annual conference on Genetic and evolutionary computation. ACM, 2013 pp. 551-558. URL https://dl.acm.org/citation.cfm?id=2463438.
  • [16] Schneider M, Hoos H. Quantifying homogeneity of instance sets for algorithm configuration. Learning and Intelligent Optimization, 2012 pp. 190-204. doi:10.1007/978-3-642-34413-8_14.
  • [17] Malitsky Y, Sabharwal A, Samulowitz H, Sellmann M. Algorithm Portfolios Based on Cost-Sensitive Hierarchical Clustering. In: IJCAI, volume 13. 2013 pp. 608-614. ISBN: 978-1-57735-633-2.
  • [18] Hurley B, Kotthoff L, Malitsky Y, O’Sullivan B. Proteus: A Hierarchical Portfolio of Solvers and Transformations. In: Proceedings of CPAIOR. 2014 pp. 301-317. doi:10.1007/978-3-319-07046-9_22.
  • [19] Xu L, Hutter F, Hoos HH, Leyton-Brown K. SATzilla: portfolio-based algorithm selection for SAT. Journal of Artificial Intelligence Research, 2008. pp. 565-606. URL https://doi.org/10.1613/jair.2490.
  • [20] Herbrich R, Minka T, Graepel T. TrueSkill: A Bayesian skill rating system. In: Advances in neural information processing systems. 2006 pp. 569-576. URL https://papers.nips.cc/paper/3079-trueskilltm-a-bayesian-skill-rating-system.
  • [21] Vermorel J, Mohri M. Multi-armed bandit algorithms and empirical evaluation. In: Proceedings of ECML. Springer, 2005 pp. 437-448. doi:10.1007/11564096_42.
  • [22] Kuleshov V, Precup D. Algorithms for multi-armed bandit problems. arXiv preprint arXiv:1402.6028, 2014. URL https://arxiv.org/abs/1402.6028.
  • [23] Balint A, Belov A, Diepold D, Gerber S, Järvisalo M, Sinz C (eds.). Proceedings of SAT Challenge 2012. 2012. URL http://hdl.handle.net/10138/34218.
  • [24] Bischl B, Kerschke P, Kotthoff L, Lindauer M, Malitsky Y, Fréchette A, Hoos H, Hutter F, Leyton-Brown K, Tierney K, et al. Aslib: A benchmark library for algorithm selection. Artificial Intelligence, 2016.237:41-58. URL https://doi.org/10.1016/j.artint.2016.04.003.
  • [25] Biere A. Lingeling. SAT Race, 2010. URL http://fmv.jku.at/papers/Biere-FMV-TR-10-1.pdf.
  • [26] Hutter F, López-Ibáñez M, Fawcett C, Lindauer M, Hoos HH, Leyton-Brown K, Stützle T. AClib: A benchmark library for algorithm configuration. In: Proceedings of LION. Springer, 2014 pp. 36-40. doi:10.1007/978-3-319-09584-4_4.
  • [27] Leyton-Brown K, Pearson M, Shoham Y. Towards a universal test suite for combinatorial auction algorithms. In: Proceedings ACM-EC. ACM, 2000 pp. 66-76. doi:10.1145/352871.352879.
  • [28] IBM, 2014. IBM ILOG CPLEX Optimization Studio 12.6.1. URL https://www.ibm.com/support/knowledgecenter/SSSA5P_12.6.1/ilog.odms.studio.help/Optimization_Studio/topics/COS_home.html.
  • [29] Xu L, Hutter F, Hoos HH, Leyton-Brown K. SATzilla2009: an automatic algorithm portfolio for SAT. SAT, 2009.4:53-55. URL https://www.cs.ubc.ca/~kevinlb/papers/2009-satzilla09.pdf.
  • [30] Xu L, Hutter F, Hoos H, Leyton-Brown K. Features for SAT. University of British Columbia, Tech. Rep, 2012. URL https://www.cs.ubc.ca/labs/beta/Projects/SATzilla/Report_SAT_features.pdf.
  • [31] Hurley B, Kotthoff L, Malitsky Y, O’Sullivan B. Proteus: A hierarchical portfolio of solvers and transformations. In: International Conference on AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems. Springer, 2014 pp. 301-317. doi:10.1007/978-3-319-07046-9_22.
  • [32] O’Mahony E, Hebrard E, Holland A, Nugent C, O’Sullivan B. Using case-based reasoning in an algorithm portfolio for constraint solving. In: Irish conference on artificial intelligence and cognitive science. 2008 pp. 210-216. URL http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.219.7318&rep=rep1&type=pdf.
  • [33] Zongker D. trueskill.py, 2014. URL https://github.com/dougz/trueskill.
  • [34] Malitsky Y, Sellmann M. Instance-specific algorithm configuration as a method for non-model-based portfolio generation. In: Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, pp. 244-259. Springer, 2012. doi:10.1007/978-3-642-29828-8_16.
  • [35] UBC, 2013. Configurable SAT Solver Competition (CSSC). URL http://www.cs.ubc.ca/labs/beta/Projects/CSSC2013/.
  • [36] Brummayer R, Lonsing F, Biere A. Automated testing and debugging of SAT and QBF solvers. In: Theory and Applications of Satisfiability Testing-SAT 2010. Springer, 2010 pp. 44-57. doi:10.1007/978-3-642-14186-7_6.
  • [37] Birattari M, Yuan Z, Balaprakash P, Stützle T. F-Race and iterated F-Race: An overview. In: Experimental methods for the analysis of optimization algorithms, Springer, 2010 pp. 311-336. doi:10.1007/978-3-642-02538-9_13.
  • [38] Fitzgerald T, O’Sullivan B. Analysing the effect of candidate selection and instance ordering in a realtime algorithm configuration system. In: Proceedings of the Symposium on Applied Computing. ACM, 2017 pp. 1003-1008. doi:10.1145/3019612.3019718.
Notes
Record compiled under agreement 509/P-DUN/2018 from the funds of the Ministry of Science and Higher Education (MNiSW) allocated to science dissemination activities (2019).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-eca556b7-8031-4bc5-a942-b6d216e5a558