Article title

Traffic Signal Settings Optimization Using Gradient Descent

Full text / Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
We investigate the performance of gradient descent optimization (GR) applied to the traffic signal setting problem and compare it with genetic algorithms. We used neural networks as metamodels evaluating the quality of signal settings and found that both optimization methods produce similar results; e.g., in both cases the accuracy of the neural networks close to local optima depends on the activation function (TANH activation makes the optimization process converge to different minima than ReLU activation does).
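For illustration, below is a minimal sketch (in Python, with TensorFlow/Keras, which the paper's bibliography cites [22, 23]) of the approach the abstract describes: gradient descent applied to the inputs of a neural metamodel rather than to its weights. The number of signals, the network architecture, and the 0–120 s range of signal settings are assumptions made for this example, not values from the paper, and the untrained stand-in network takes the place of a metamodel fitted to traffic simulation data.

import tensorflow as tf

N_SIGNALS = 21  # hypothetical number of signalized intersections

# Stand-in for the trained metamodel; in the paper's setting, a network
# trained on traffic simulator outputs would be loaded here instead.
metamodel = tf.keras.Sequential([
    tf.keras.Input(shape=(N_SIGNALS,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted quality of the settings (e.g., total delay)
])

# The decision variables are the signal settings themselves, not the weights.
settings = tf.Variable(tf.random.uniform((1, N_SIGNALS), 0.0, 120.0))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.5)

for step in range(200):
    with tf.GradientTape() as tape:
        predicted_delay = metamodel(settings)             # surrogate evaluation
    (grad,) = tape.gradient(predicted_delay, [settings])  # d(delay)/d(settings)
    optimizer.apply_gradients([(grad, settings)])
    # Keep the settings in a feasible range (here an assumed 120 s cycle).
    settings.assign(tf.clip_by_value(settings, 0.0, 120.0))

Since both TANH and ReLU networks are differentiable almost everywhere, the same loop applies to either metamodel; per the abstract, the choice of activation determines which local minima the descent converges to.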
Year
Volume
Pages
19–30
Physical description
Bibliography: 26 items, figures.
Authors
  • TensorCell
  • TensorCell
  • Faculty of Mathematics and Computer Science, Jagiellonian University
  • TensorCell
  • Faculty of Mathematics, Informatics and Mechanics, University of Warsaw
  • TensorCell
author
  • TensorCell
  • Faculty of Mathematics, Informatics and Mechanics, University of Warsaw
Bibliography
  • [1] P. Gora and P. Pardel. Application of genetic algorithms and high-performance computing to the traffic signal setting problem. 24th International Workshop CS&P 2015, Vol. 1, ISBN: 978-83-7996-181-8, pages 146–157, 2015.
  • [2] Federal Highway Administration. Traffic Signal Timing Manual. 2008.
  • [3] H. Prothmann. Organic Traffic Control. KIT Scientific Publishing, 2011.
  • [4] C. B. Yang and Y. J. Yeh. The model and properties of the traffic light problem. Proc. of International Conference on Algorithms, pages 19–26, 1996.
  • [5] S. Luke. Essentials of Metaheuristics. lulu.com, first edition, 2009. Available at http://cs.gmu.edu/~sean/books/metaheuristics/.
  • [6] G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T. Liu. LightGBM: A highly efficient gradient boosting decision tree. Advances in Neural Information Processing Systems 30, 2017.
  • [7] Y. Jin. Surrogate-assisted evolutionary computation: Recent advances and future challenges. Swarm and Evolutionary Computation, 1(2):61–70, 2011.
  • [8] F. D. Johansson, U. Shalit, and D. Sontag. Learning representations for counterfactual inference. 33rd International Conference on Machine Learning, 2016.
  • [9] P. Gora and M. Bardoński. Training neural networks to approximate traffic simulation outcomes. 5th IEEE International Conference on Models and Technologies for Intelligent Transportation Systems, IEEE, pages 889–894, 2017.
  • [10] P. Gora, M. Brzeski, M. Możejko, A. Klemenko, and A. Kochański. Investigating performance of neural networks and gradient boosting models approximating microscopic traffic simulations in traffic optimization tasks. NIPS Workshop on Machine Learning for Intelligent Transportation Systems, 2018.
  • [11] U. Jang, W. Xi, and S. Jha. Objective metrics and gradient descent algorithms for adversarial examples in machine learning. Proceedings of the 33rd Annual Computer Security Applications Conference, pages 262–277, 2017.
  • [12] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012.
  • [13] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Geoffrey J. Gordon, David B. Dunson, and Miroslav Dudík, editors, AISTATS, volume 15 of JMLR Proceedings, pages 315–323. JMLR.org, 2011.
  • [14] D. Mishkin, N. Sergievskiy, and J. Matas. Systematic evaluation of convolution neural network advances on the ImageNet. Comput. Vis. Image Underst., 161(C):11–19, August 2017.
  • [15] P. Gora. Traffic Simulation Framework - a cellular automaton based tool for simulating and investigating real city traffic. Recent Advances in Intelligent Information Systems, ISBN: 978-83-60434-59-8, pages 641–653, 2009.
  • [16] Dataset used in experiments. https://goo.gl/ytPRQg, 2018.
  • [17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. 2015.
  • [18] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. 2015.
  • [19] S. Ruder. An overview of gradient descent optimization algorithms. 2016.
  • [20] R. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, and H. S. Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405:947–951, 2000.
  • [21] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. Proc. of AISTATS, 2010.
  • [22] F. Chollet et al. Keras. 2015.
  • [23] M. Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems. 2015.
  • [24] Settings of genetic algorithms. https://goo.gl/G3YomX, 2018.
  • [25] Y. E. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k²). Dokl. Akad. Nauk SSSR, 269:543–547, 1983.
  • [26] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. Proc. of ICLR, 2015.
Notes
Record compiled under agreement 509/P-DUN/2018 from funds of the Ministry of Science and Higher Education (MNiSW) allocated to science-promoting activities (2019).
Document type
YADDA identifier
bwmeta1.element.baztech-c34ea2c3-3239-4047-8bec-b3b707b40b2a