Article title

An accurate and stable pose estimation method for planar cases considering the line constraints between every two points

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Current solutions to the pose estimation problem using coplanar feature points (the PnP problem) can be divided into non-iterative and iterative methods. Since the accuracy, stability, and efficiency of iterative methods are unsatisfactory, non-iterative methods have become more popular. However, non-iterative methods only consider the correspondence of the feature points with their 2D projections and ignore the constraints formed between the feature points, which lowers pose estimation accuracy and stability. In this work, we propose an accurate and stable pose estimation method that considers the line constraints between every two feature points. Our method has two steps. In the first step, the pose is solved non-iteratively, considering both the correspondence of the 3D feature points with their 2D projections and the line constraints formed by every two feature points. In the second step, the pose is refined by minimizing the re-projection errors with one iteration, further improving accuracy and stability. Simulation and real-world experiments show that our method's accuracy, stability, and computational efficiency are better than those of existing pose estimation methods. In the -45° to +45° measuring range, the maximum angle measurement error is no more than 0.039° and the average angle measurement error is no more than 0.016°. In the 0 mm to 30 mm measuring range, the maximum displacement measurement error is no more than 0.049 mm and the average displacement measurement error is no more than 0.012 mm. Compared with other current pose estimation methods, ours is the most efficient while guaranteeing measurement accuracy and stability.
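The two-step scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' algorithm: step 1 is replaced by a plain homography (DLT) initialisation that omits the paper's line constraints, and step 2 is a single Gauss–Newton refinement of the re-projection error. The function names (`estimate_planar_pose`, `rodrigues`) are hypothetical.

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    Kx = np.array([[0, -k[2], k[1]],
                   [k[2], 0, -k[0]],
                   [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * Kx + (1 - np.cos(th)) * (Kx @ Kx)

def estimate_planar_pose(obj_pts, img_pts, K):
    """Two-step planar pose sketch: (1) non-iterative initialisation via
    homography DLT, (2) one Gauss-Newton iteration on the re-projection
    error. obj_pts: Nx2 points on the z=0 model plane; img_pts: Nx2 pixel
    projections; K: 3x3 intrinsic matrix. Returns (R, t)."""
    obj_pts = np.asarray(obj_pts, float)
    img_pts = np.asarray(img_pts, float)

    # Step 1: homography from the model plane to the image (DLT).
    A = []
    for (X, Y), (u, v) in zip(obj_pts, img_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    H = np.linalg.svd(np.asarray(A))[2][-1].reshape(3, 3)

    # Decompose H = s * K [r1 r2 t] into an initial pose.
    B = np.linalg.inv(K) @ H
    s = (np.linalg.norm(B[:, 0]) + np.linalg.norm(B[:, 1])) / 2
    B /= s * np.sign(B[2, 2])            # fix scale; keep the plane in front
    t = B[:, 2].copy()
    R = np.column_stack([B[:, 0], B[:, 1], np.cross(B[:, 0], B[:, 1])])
    U, _, Vt = np.linalg.svd(R)          # project onto SO(3)
    R = U @ np.diag([1, 1, np.linalg.det(U @ Vt)]) @ Vt

    # Step 2: one Gauss-Newton step minimising the re-projection error.
    def residual(delta):
        Rd = rodrigues(delta[:3]) @ R
        td = t + delta[3:]
        P = Rd[:, :2] @ obj_pts.T + td[:, None]   # z=0 plane -> camera frame
        p = K @ P
        return (p[:2] / p[2] - img_pts.T).ravel()

    r0 = residual(np.zeros(6))
    J = np.empty((r0.size, 6))
    eps = 1e-6
    for i in range(6):                   # numeric Jacobian, column by column
        d = np.zeros(6)
        d[i] = eps
        J[:, i] = (residual(d) - r0) / eps
    delta = np.linalg.lstsq(J, -r0, rcond=None)[0]
    return rodrigues(delta[:3]) @ R, t + delta[3:]
```

With noise-free projections the DLT initialisation is already near-exact and the single refinement step leaves it essentially unchanged; with noisy image points the refinement is what restores accuracy, which mirrors the role of the paper's second step.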
Keywords
Year
Pages
235--258
Physical description
Bibliography: 33 items; photographs, figures, tables, charts, formulas
Authors
author
  • School of Mechanical Engineering, Tianjin University of Technology and Education, Tianjin, China
author
  • School of Mechanical Engineering, Tianjin University of Technology and Education, Tianjin, China
author
  • State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University, Tianjin, China
author
  • School of Mechanical Engineering, Tianjin University of Technology and Education, Tianjin, China
Bibliography
  • [1] Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381-395. https://doi.org/10.1145/358669.358692
  • [2] Dong, Y., Zhang, G., Chang, S., Zhang, Z., & Li, Y. (2021). A pose measurement algorithm of space target based on monocular vision and accuracy analysis. Acta Photonica Sinica, 50(8), 1112003. https://doi.org/10.3788/gzxb20215011.1112003
  • [3] Zhu, Z., Wang, S., Zhang, H., & Zhang, F. (2020). Camera-projector system calibration method based on optimal polarization angle. Optical Engineering, 59(3), 035104-035104. https://doi.org/10.1117/1.OE.59.3.035104
  • [4] Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), 1330-1334. https://doi.org/10.1109/34.888718
  • [5] Cui, H., Sun, R., Fang, Z., Lou, H., Tian, W., & Liao, W. (2020). A novel flexible two-step method for eye-to-hand calibration for robot assembly system. Measurement and Control, 53(9-10), 2020-2029. https://doi.org/10.1177/0020294020964842
  • [6] Yuan, K., Guo, Z., & Wang, Z. J. (2020). RGGNet: Tolerance aware LiDAR-camera online calibration with geometric deep learning and generative model. IEEE Robotics and Automation Letters, 5(4), 6956-6963. https://doi.org/10.1109/LRA.2020.3026958
  • [7] Enayati, N., De Momi, E., & Ferrigno, G. (2015). A quaternion-based unscented Kalman filter for robust optical/inertial motion tracking in computer-assisted surgery. IEEE Transactions on Instrumentation and Measurement, 64(8), 2291-2301. https://doi.org/10.1109/TIM.2015.2390832
  • [8] Fu, B., Han, F., Wang, Y., Jiao, Y., Ding, X., Tan, Q., ... & Xiong, R. (2021). High-precision multicamera-assisted camera-IMU calibration: Theory and method. IEEE Transactions on Instrumentation and Measurement, 70, 1-17. https://doi.org/10.1109/TIM.2021.3051726
  • [9] Guo, X., Tang, J., Li, J., Shen, C., & Liu, J. (2019). Attitude measurement based on imaging ray tracking model and orthographic projection with iteration algorithm. ISA Transactions, 95, 379-391. https://doi.org/10.1016/j.isatra.2019.05.009
  • [10] Zhang, Z., Liu, B., & Jiang, Y. (2015). A two-step pose estimation method based on four non-coplanar points. Optik, 126(17), 1520-1526. https://doi.org/10.1016/j.ijleo.2015.04.039
  • [11] DeMenthon, D. F., & Davis, L. S. (1995). Model-based object pose in 25 lines of code. International Journal of Computer Vision, 15, 123-141. https://doi.org/10.1007/BF01450852
  • [12] David, P., Dementhon, D., Duraiswami, R., & Samet, H. (2004). SoftPOSIT: Simultaneous pose and correspondence determination. International Journal of Computer Vision, 59, 259-284. https://doi.org/10.1023/B:VISI.0000025800.10423.1f
  • [13] Oberkampf, D., DeMenthon, D. F., & Davis, L. S. (1996). Iterative pose estimation using coplanar feature points. Computer Vision and Image Understanding, 63(2), 495-511. https://doi.org/10.1006/cviu.1996.0037
  • [14] Lu, C. P., Hager, G. D., & Mjolsness, E. (2000). Fast and globally convergent pose estimation from video images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(6), 610-622. https://doi.org/10.1109/34.862199
  • [15] Schweighofer, G., & Pinz, A. (2006). Robust pose estimation from a planar target. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(9), 2024-2030. https://doi.org/10.1109/TPAMI.2006.252
  • [16] Dong, H., Sun, C., Zhang, B., & Wang, P. (2019). Simultaneous pose and correspondence determination combining softassign and orthogonal iteration. IEEE Access, 7, 137720-137730. https://doi.org/10.1109/ACCESS.2019.2939020
  • [17] Sun, C., Dong, H., Zhang, B., & Wang, P. (2018). An orthogonal iteration pose estimation algorithm based on an incident ray tracking model. Measurement Science and Technology, 29(6), 095402. https://doi.org/10.1088/1361-6501/aad014
  • [18] Wu, P. C., Tseng, H. Y., Yang, M. H., & Chien, S. Y. (2018). Direct pose estimation for planar objects. Computer Vision and Image Understanding, 172, 50-66. https://doi.org/10.1016/j.cviu.2018.03.006
  • [19] Lepetit, V., Moreno-Noguer, F., & Fua, P. (2009). EPnP: An accurate O(n) solution to the PnP problem. International Journal of Computer Vision, 81, 155-166. https://doi.org/10.1007/s11263-008-0152-6
  • [20] Hesch, J. A., & Roumeliotis, S. I. (2011, November). A direct least-squares (DLS) method for PnP. In 2011 International Conference on Computer Vision (pp. 383-390). IEEE. https://doi.org/10.1109/ICCV.2011.6126266
  • [21] Li, S., Xu, C., & Xie, M. (2012). A robust O(n) solution to the perspective-n-point problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(7), 1444-1450. https://doi.org/10.1109/TPAMI.2012.41
  • [22] Wang, P., Xu, G., Cheng, Y., & Yu, Q. (2018). A simple, robust and fast method for the perspective-n-point problem. Pattern Recognition Letters, 108, 31-37. https://doi.org/10.1016/j.patrec.2018.02.028
  • [23] Zheng, Y., Sugimoto, S., & Okutomi, M. (2013). ASPnP: An accurate and scalable solution to the perspective-n-point problem. IEICE Transactions on Information and Systems, 96(7), 1525-1535. https://doi.org/10.1587/transinf.E96.D.1525
  • [24] Zheng, Y., Kuang, Y., Sugimoto, S., Astrom, K., & Okutomi, M. (2013). Revisiting the PnP problem: A fast, general and optimal solution. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2344-2351). https://doi.org/10.1109/ICCV.2013.291
  • [25] Kneip, L., Li, H., & Seo, Y. (2014). UPnP: An optimal O(n) solution to the absolute pose problem with universal applicability. In Computer Vision - ECCV 2014. Lecture Notes in Computer Science (pp. 127-142). Springer International Publishing. https://doi.org/10.1007/978-3-319-10590-1_9
  • [26] Liu, Y., Lei, B., Fan, B., & Bian, J. (2020). Target positioning technology and its structural parameter optimization based on vision measurement. Infrared and Laser Engineering, 49(S02).
  • [27] Liu, X., Liu, Z., Duan, G., Cheng, J., Jiang, X., & Tan, J. (2018). Precise and robust binocular camera calibration based on multiple constraints. Applied Optics, 57(18), 5130-5140. https://doi.org/10.1364/AO.57.005130
  • [28] Yang, P., Yin, Y., Lu, R., & Zhu, H. (2022). Binocular camera calibration based on directional target and multi-constraint optimization. Acta Optica Sinica, 42(8), 0815002. https://doi.org/10.3788/AOS202242.0815002 (in Chinese)
  • [29] Zhang, Z., Xu, K., Wu, Y., Zhang, S., et al. (2022). A simple and precise calibration method for binocular vision. Measurement Science and Technology, 33(6), 065016. https://doi.org/10.1088/1361-6501/ac4ce5
  • [30] Zimiao, Z., Kai, X., Yanan, W., Shihai, Z., & Yang, Q. (2022). A simple and precise calibration method for binocular vision. Measurement Science and Technology, 33(6), 065016. https://doi.org/10.1088/1361-6501/ac4ce5
  • [31] Huo, J., Cui, J. S., & Wang, W. X. (2014). Error analysis of monocular visual position measurement based on coplanar feature points. Acta Photonica Sinica, 43(5), 144-150. https://doi.org/10.3788/gzxb20144305.0512003
  • [32] Qu, Y., & Hou, W. (2019). Attitude accuracy analysis of PnP based on error propagation theory. Optics and Precision Engineering, 27(2), 479-487. https://doi.org/10.3788/OPE.20192702.0479 (in Chinese)
  • [33] Qu, Y., Liu, J., & Hou, W., (2020). Graphics Design of Cooperative Targets on Monocular Vision High Precision Measurement. Acta Optica Sinica, 40(10). https://doi.org/10.3788/AOS202040.1315001 (in Chinese)
Notes
1. This research was supported by the Tianjin University Science and Technology Development Fund Project (2022ZD029).
2. Record developed with funds from the Ministry of Science and Higher Education (MNiSW), agreement no. SONP/SP/546092/2022, under the "Social Responsibility of Science" programme, module: Popularisation of Science and Promotion of Sport (2024).
Document type
YADDA identifier
bwmeta1.element.baztech-e5e9c2a5-4dd6-4083-92dc-ebc685275303