Article title

Vision-based positioning of electric buses for assisted docking to charging stations

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
We present a novel approach to vision-based localization of electric city buses for assisted docking to a charging station. The method assumes that the charging station is a known object, and employs a monocular camera system for positioning based on carefully selected point features detected on the charging station. While the pose is estimated using a geometric method and taking advantage of the known structure of the feature points, the detection of keypoints themselves and the initial recognition of the charging station are accomplished using neural network models. We propose two novel neural network architectures for the estimation of keypoints. Extensive experiments presented in the paper made it possible to select the MRHKN architecture as the one that outperforms state-of-the-art keypoint detectors in the task considered, and offers the best performance with respect to the estimated translation and rotation of the bus with a low-cost hardware setup and minimal passive markers on the charging station.
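The pose-from-known-structure step described above is, in essence, a Perspective-n-Point (PnP) problem (cf. entries [13] and [16] in the bibliography below). The following minimal Python/OpenCV sketch illustrates only that final geometric step, under assumed values: the station keypoint coordinates, the detected image points and the camera intrinsics are hypothetical placeholders, and the snippet is not the authors' MRHKN-based implementation.

```python
# Illustrative sketch (not the paper's implementation): once the charging-station
# keypoints are detected in the image by a neural network, the bus pose can be
# recovered geometrically with a PnP solver, because the 3D layout of those
# points on the station is known. All numeric values below are hypothetical.
import numpy as np
import cv2

# Known 3D positions of the selected point features on the charging station,
# expressed in the station's coordinate frame (metres) -- placeholder values.
object_points = np.array([
    [0.00, 0.00, 0.00],
    [1.20, 0.00, 0.00],
    [1.20, 0.80, 0.00],
    [0.00, 0.80, 0.00],
    [0.60, 0.40, 0.15],
], dtype=np.float64)

# Corresponding 2D keypoints detected in the camera image (pixels) -- placeholders.
image_points = np.array([
    [412.3, 310.8],
    [655.1, 318.2],
    [651.7, 190.4],
    [409.9, 184.6],
    [532.0, 251.3],
], dtype=np.float64)

# Intrinsic calibration of the monocular camera (focal lengths, principal point).
camera_matrix = np.array([
    [1000.0,    0.0, 640.0],
    [   0.0, 1000.0, 360.0],
    [   0.0,    0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume lens distortion already compensated

# EPnP yields the rotation and translation of the station frame in the camera frame;
# inverting that transform gives the camera (and hence bus) pose relative to the station.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix,
                              dist_coeffs, flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)       # 3x3 rotation matrix
    cam_in_station = -R.T @ tvec     # camera position in the station frame
    print("camera position w.r.t. station [m]:", cam_in_station.ravel())
```

In the setting of the paper, the 2D points would come from the neural keypoint detector rather than being hard-coded, and the resulting pose estimate would be passed on to the docking assistance system.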
Year
Pages
583–599
Physical description
Bibliography: 47 items, figures, tables, charts
Authors
author
  • Institute of Robotics and Machine Intelligence, Poznan University of Technology, Piotrowo 3A, 60-965 Poznan, Poland
  • Institute of Robotics and Machine Intelligence, Poznan University of Technology, Piotrowo 3A, 60-965 Poznan, Poland
  • Institute of Robotics and Machine Intelligence, Poznan University of Technology, Piotrowo 3A, 60-965 Poznan, Poland
Bibliography
  • [1] Andriluka, M., Pishchulin, L., Gehler, P. and Schiele, B. (2014). 2D human pose estimation: New benchmark and state of the art analysis, IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, pp. 3686–3693.
  • [2] Clarembaux, L.G., Pérez, J., Gonzalez, D. and Nashashibi, F. (2016). Perception and control strategies for autonomous docking for electric freight vehicles, Transportation Research Procedia 14: 1516–1522.
  • [3] Dreossi, T., Ghosh, S., Yue, X., Keutzer, K., Sangiovanni-Vincentelli, A. and Seshia, S.A. (2018). Counterexample-guided data augmentation, Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, pp. 2071–2078.
  • [4] Fan, Y. and Zhang, W. (2015). Traffic sign detection and classification for advanced driver assistant systems, International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Zhangjiajie, China, pp. 1335–1339.
  • [5] Gawron, T., Mydlarz, M. and Michalek, M.M. (2019). Algorithmization of constrained monotonic maneuvers for an advanced driver assistant system in the intelligent urban buses, IEEE Intelligent Vehicles Symposium, Paris, France, pp. 232–238.
  • [6] Geiger, A., Lenz, P. and Urtasun, R. (2012). Are we ready for autonomous driving? The KITTI vision benchmark suite, Conference on Computer Vision and Pattern Recognition, Rhode Island, USA, pp. 3354–3361.
  • [7] Girshick, R., Donahue, J., Darrell, T. and Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation, IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, pp. 580–587.
  • [8] Hartley, R.I. and Zisserman, A. (2004). Multiple View Geometry in Computer Vision, Cambridge University Press, Cambridge.
  • [9] He, K., Zhang, X., Ren, S. and Sun, J. (2016). Deep residual learning for image recognition, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, pp. 770–778.
  • [10] Kendall, A., Grimes, M. and Cipolla, R. (2015). Posenet: A convolutional network for real-time 6-DOF camera relocalization, IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, pp. 2938–2946.
  • [11] Kim, J., Cho, H., Hwangbo, M., Choi, J., Canny, J. and Kwon, Y.P. (2018). Deep traffic light detection for self-driving cars from a large-scale dataset, International Conference on Intelligent Transportation Systems (ITSC), Maui, USA, pp. 280–285.
  • [12] Kukkala, V.K., Tunnell, J., Pasricha, S. and Bradley, T. (2018). Advanced driver-assistance systems: A path toward autonomous vehicles, IEEE Consumer Electronics Magazine 7(5): 18–25.
  • [13] Lepetit, V., Moreno-Noguer, F. and Fua, P. (2009). EPnP: An accurate O(n) solution to the PnP problem, International Journal of Computer Vision 81(2): 155–166.
  • [14] Lim, K.L. and Bräunl, T. (2020). A review of visual odometry methods and its applications for autonomous driving, arXiv abs/2009.09193.
  • [15] Liu, J.-J., Hou, Q., Cheng, M.-M., Wang, C. and Feng, J. (2020). Improving convolutional networks with self-calibrated convolutions, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10093–10102 (online).
  • [16] Lu, X.X. (2018). A review of solutions for perspective-n-point problem in camera pose estimation, Journal of Physics: Conference Series 1087(5): 052009.
  • [17] Luo, R.C., Liao, C.T., Su, K.L. and Lin, K.C. (2005). Automatic docking and recharging system for autonomous security robot, IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Canada, pp. 2953–2958.
  • [18] Marchand, E., Spindler, F. and Chaumette, F. (2005). ViSP for visual servoing: A generic software platform with a wide class of robot control skills, IEEE Robotics and Automation Magazine 12(4): 40–52.
  • [19] Michałek, M. and Kiełczewski, M. (2015). The concept of passive control assistance for docking maneuvers with n-trailer vehicles, IEEE/ASME Transactions on Mechatronics 20(5): 2075–2084.
  • [20] Michałek, M.M., Gawron, T., Nowicki, M. and Skrzypczyński, P. (2021). Precise docking at charging stations for large-capacity vehicles: An advanced driver-assistance system for drivers of electric urban buses, IEEE Vehicular Technology Magazine 16(3): 57–65.
  • [21] Michałek, M.M., Patkowski, B. and Gawron, T. (2020). Modular kinematic modelling of articulated buses, IEEE Transactions on Vehicular Technology 69(8): 8381–8394.
  • [22] Miseikis, J., Rüther, M., Walzel, B., Hirz, M. and Brunner, H. (2017). 3D vision guided robotic charging station for electric and plug-in hybrid vehicles, arXiv abs/1703.05381.
  • [23] MMPose (2020). OpenMMLab pose estimation toolbox and benchmark, https://github.com/open-mmlab/mmpose.
  • [24] Mur-Artal, R. and Tardós, J.D. (2017). ORB-SLAM2: An open-source SLAM system for monocular, stereo and RGB-D cameras, IEEE Transactions on Robotics 33(5): 1255–1262.
  • [25] Nowak, T., Nowicki, M., Ćwian, K. and Skrzypczyński, P. (2019). How to improve object detection in a driver assistance system applying explainable deep learning, IEEE Intelligent Vehicles Symposium, Paris, France, pp. 226–231.
  • [26] Nowak, T., Nowicki, M., Ćwian, K. and Skrzypczyński, P. (2020). Leveraging object recognition in reliable vehicle localization from monocular images, in C. Zieliński et al. (Eds), Automation 2020: Towards Industry of the Future, Springer, Cham, pp. 195–205.
  • [27] Olson, C. and Abi-Rached, H. (2010). Wide-baseline stereo vision for terrain mapping, Machine Vision and Applications 21(5): 713–725.
  • [28] Papandreou, G., Zhu, T., Kanazawa, N., Toshev, A., Tompson, J., Bregler, C. and Murphy, K. (2017). Towards accurate multi-person pose estimation in the wild, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, pp. 3711–3719.
  • [29] Pérez, J., Nashashibi, F., Lefaudeux, B., Resende, P. and Pollard, E. (2013). Autonomous docking based on infrared system for electric vehicle charging in urban areas, Sensors 13(2): 2645–2663.
  • [30] Petrov, P., Boussard, C., Ammoun, S. and Nashashibi, F. (2012). A hybrid control for automatic docking of electric vehicles for recharging, IEEE International Conference on Robotics and Automation, St. Paul, USA, pp. 2966–2971.
  • [31] Rahmat, R., Dennis, D., Sitompul, O., Sarah, P. and Budiarto, R. (2019). Advertisement billboard detection and geotagging system with inductive transfer learning in deep convolutional neural network, TELKOMNIKA (Telecommunication Computing Electronics and Control) 17(5): 2659.
  • [32] Redmon, J., Divvala, S., Girshick, R. and Farhadi, A. (2016). You only look once: Unified, real-time object detection, IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, pp. 779–788.
  • [33] Ren, S., He, K., Girshick, R. and Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks, Advances in Neural Information Processing Systems, Montreal, Canada, pp. 91–99.
  • [34] Royer, E., Lhuillier, M., Dhome, M. and Chateau, T. (2005). Localization in urban environments: Monocular vision compared to a differential GPS sensor, IEEE Conference on Computer Vision and Pattern Recognition, San Diego, USA, Vol. 2, pp. 114–121.
  • [35] Schubert, E., Sander, J., Ester, M., Kriegel, H.P. and Xu, X. (2017). DBSCAN revisited: Why and how you should (still) use DBSCAN, ACM Transactions on Database Systems 42(3): 1–21.
  • [36] Schunk Carbon Technology (2021). Schunk smart charging, https://www.schunk-carbontechnology.com/en/smart-charging.
  • [37] Skrzypczyński, P. (2009). Simultaneous localization and mapping: A feature-based probabilistic approach, International Journal of Applied Mathematics and Computer Science 19(4): 575–588, DOI: 10.2478/v10006-009-0045-z.
  • [38] Taghibakhshi, A., Ogden, N. and West, M. (2021). Local navigation and docking of an autonomous robot mower using reinforcement learning and computer vision, 13th International Conference on Computer and Automation Engineering (ICCAE), Brussels, Belgium, pp. 10–14.
  • [39] Toshpulatov, M., Lee, W., Lee, S. and Haghighian Roudsari, A. (2022). Human pose, hand and mesh estimation using deep learning: A survey, The Journal of Supercomputing 78(6): 7616–7654.
  • [40] Triggs, B., McLauchlan, P.F., Hartley, R.I. and Fitzgibbon, A.W. (2000). Bundle adjustment—A modern synthesis, in B. Triggs et al. (Eds), Vision Algorithms: Theory and Practice, Springer, Berlin, pp. 298–372.
  • [41] u-blox (2020). ZED-F9P: u-blox F9 high precision GNSS module, https://content.u-blox.com/sites/default/files/ZED-F9P-04B_DataSheet_UBX-21044850.pdf.
  • [42] Vivacqua, R., Vassallo, R. and Martins, F. (2017). A low cost sensors approach for accurate vehicle localization and autonomous driving application, Sensors 17(10), Article no. 2359.
  • [43] Wang, J. and Olson, E. (2016). AprilTag 2: Efficient and robust fiducial detection, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, pp. 4193–4198.
  • [44] Wang, J., Sun, K., Cheng, T., Jiang, B., Deng, C., Zhao, Y., Liu, D., Mu, Y., Tan, M., Wang, X., Liu, W. and Xiao, B. (2021). Deep high-resolution representation learning for visual recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence 43(10): 3349–3364.
  • [45] Xiang, Y., Schmidt, T., Narayanan, V. and Fox, D. (2018). PoseCNN: A convolutional neural network for 6D object pose estimation in cluttered scenes, Proceedings of Robotics: Science and Systems, Pittsburgh, USA.
  • [46] Youjing, C. and Shuzhi, S.G. (2003). Autonomous vehicle positioning with GPS in urban canyon environments, IEEE Transactions on Robotics and Automation 19(1): 15–25.
  • [47] Zhang, W., Fu, C. and Zhu, M. (2020). Joint object contour points and semantics for instance segmentation, arXiv abs/2008.00460.
Notes
Record created with funds from the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme, module: Popularisation of Science and Promotion of Sport (2022-2023)
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-9bb4061a-b571-449d-8c8f-34376ed82027