Article title
Content
Full texts:
Identifiers
Title variants
Publication languages
Abstracts
The field of ophthalmic surgery demands accurate identification of specialized surgical instruments. Manual recognition can be time-consuming and prone to errors. In recent years, neural networks have emerged as promising techniques for automating the classification process. However, the deployment of these advanced algorithms requires the collection of large amounts of data and a painstaking process of tagging selected elements. This paper presents a novel investigation into the application of neural networks for the detection and classification of surgical instruments in ophthalmic surgery. The main focus of the research is the application of active learning techniques, in which the model is trained by selecting the most informative instances to expand the training set. Various active learning methods are compared, with a focus on their effectiveness in reducing the need for extensive data annotation – a major concern in the field of surgery. The use of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to achieve high performance in the task of surgical tool detection is outlined. The combination of artificial intelligence (AI), machine learning, and active learning approaches, specifically in the field of ophthalmic surgery, opens new perspectives for improved diagnosis and surgical planning, ultimately improving patient safety and treatment outcomes.
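To make the abstract's central idea concrete, the sketch below shows pool-based active learning with uncertainty sampling: the model repeatedly queries the unlabeled instance it is least confident about, an annotator labels it, and it is added to the training set. This is an illustrative sketch only, not the paper's implementation; the paper applies active learning to CNN/RNN-based surgical tool detectors on video frames, whereas here a generic scikit-learn classifier and synthetic toy data stand in, and all variable names are assumptions made for the example.

```python
# Minimal sketch of pool-based active learning with uncertainty sampling.
# Toy data and a generic classifier stand in for the paper's CNN/RNN
# detectors; names and parameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic features standing in for image data: a small labeled seed set
# and a large unlabeled pool that an annotator would label on demand.
X_pool = rng.normal(size=(1000, 16))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # hidden "oracle" labels
labeled = list(rng.choice(len(X_pool), size=20, replace=False))
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

model = LogisticRegression(max_iter=1000)

for round_idx in range(10):
    model.fit(X_pool[labeled], y_pool[labeled])

    # Uncertainty sampling: query the pool item whose predicted class
    # probabilities are closest to uniform (least confident prediction).
    proba = model.predict_proba(X_pool[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)
    query = unlabeled[int(np.argmax(uncertainty))]

    # "Annotate" the queried instance and move it to the labeled set.
    labeled.append(query)
    unlabeled.remove(query)

print("labeled examples used:", len(labeled))
```

In an annotation-constrained setting such as surgical video labeling, the same loop structure applies; only the acquisition score changes depending on the chosen active learning method (e.g. least confidence, margin, or entropy-based selection).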
Year
Volume
Pages
art. no. e150337
Physical description
Bibliography: 27 items, figures, tables.
Authors
author
- Institute of Automatic Control and Robotics, Warsaw University of Technology, A. Boboli 8, 02-525 Warsaw, Poland
author
- Institute of Automatic Control and Robotics, Warsaw University of Technology, A. Boboli 8, 02-525 Warsaw, Poland
author
- Institute of Automatic Control and Robotics, Warsaw University of Technology, A. Boboli 8, 02-525 Warsaw, Poland
author
- Institute of Automatic Control and Robotics, Warsaw University of Technology, A. Boboli 8, 02-525 Warsaw, Poland
author
- International Centre for Translational Eye Research, Skierniewicka 10A, 01-230 Warsaw, Poland
- Institute of Physical Chemistry, Polish Academy of Sciences, Kasprzaka 44/52, 01-224 Warsaw, Poland
Bibliography
- [1] D. Zhou et al., “Eye explorer: A robotic endoscope holder for eye surgery,” Int. J. Med. Robot., vol. 17, p. e2177, 2020, doi: 10.1002/rcs.2177.
- [2] B.C. Becker and C.N. Riviere, “Real-time retinal vessel mapping and localization for intraocular surgery,” in 2013 IEEE International Conference on Robotics and Automation, 2013, pp. 5360–5365, doi: 10.1109/ICRA.2013.6631345.
- [3] M. Rosenfield and N. Logan, Optometry: Science, Techniques and Clinical Management. Elsevier Ltd, 2009.
- [4] G. Bradski, “The OpenCV Library,” Dr. Dobb’s J. Software Tools, vol. 25, pp. 120–125, 2000.
- [5] M. Alsheakhali, M. Yigitsoy, A. Eslami, and N. Navab, “Surgical tool detection and tracking in retinal microsurgery,” in Proceedings of SPIE Medical Imaging 2015: Image-Guided Procedures, Robotic Interventions, and Modeling, 2015, pp. 1–4, doi: 10.1117/12.2082335.
- [6] K. Gromada, B. Piotrowski, P. Ciąćka, A. Kurek, and A. Curatolo, “Improved tool tracking algorithm for eye surgery based on combined color space masks,” in Proceedings of SPIE Medical Imaging 2023: Image-Guided Procedures, Robotic Interventions, and Modeling, San Diego, California, United States, 2023, p. 124660G, doi: 10.1117/12.2654602.
- [7] C. Lin, Y. Zheng, C. Guang, K. Ma, and Y. Yang, “Precision forceps tracking and localisation using a Kalman filter for continuous curvilinear capsulorhexis,” Robot. Comput. Surg., vol. 18, p. e2432, 2022, doi: 10.1002/rcs.2432.
- [8] G. Luijten et al., “3D surgical instrument collection for computer vision and extended reality,” Sci. Data, vol. 10, p. 796, 2023, doi: 10.1038/s41597-023-02684-0.
- [9] M. Allan, S. Ourselin, S. Thompson, D. Hawkes, J. Kelly, and D. Stoyanov, “Toward detection and localization of instruments in minimally invasive surgery,” IEEE Trans. Biomed. Eng., vol. 60, pp. 1050–1058, 2013.
- [10] D. Bouget, R. Benenson, M. Omran, L. Riffaud, B. Schiele, and P. Jannin, “Detecting surgical tools by modelling local appearance and global shape,” IEEE Trans. Med. Imag., vol. 34, no. 12, pp. 2603–2617, 2015, doi: 10.1109/TMI.2015.2450831.
- [11] D. Bouget, M. Allan, D. Stoyanov, and P. Jannin, “Vision-based and marker-less surgical tool detection and tracking: a review of the literature,” Med. Image Anal., vol. 35, pp. 633–654, 2017, doi: 10.1016/j.media.2016.09.003.
- [12] J. Zhou and S. Payandeh, “Visual tracking of laparoscopic instruments,” J. Autom. Cont. Eng., vol. 2, no. 3, pp. 234–241, 2014.
- [13] R. Sznitman, R. Richa, R.H. Taylor, B. Jedynak, and G.D. Hager, “Unified detection and tracking of instruments during retinal microsurgery,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 5, pp. 1263–1273, 2013, doi: 10.1109/TPAMI.2012.209.
- [14] N. Rieke et al., “Real-time localization of articulated surgical instruments in retinal microsurgery,” Med. Image Anal., vol. 34, pp. 82–100, 2016, doi: 10.1016/j.media.2016.05.003.
- [15] N. Rieke et al., “Real-time online adaption for robust instrument tracking and pose estimation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, 2016, pp. 422–430.
- [16] X. Yang, Y. Zhang, and D. Zhou, “Deep networks for image super-resolution using hierarchical features,” Bull. Pol. Acad. Sci. Tech. Sci., vol. 70, no. 1, p. e139616, 2022, doi: 10.24425/bpasts.2021.139616.
- [17] B.Y. Suprapto, K.M.A. Kurniawan, M.K. Ardela, H. Hikmarika, Z. Husin, and S. Dwijayanti, “Identification of garbage in the river based on the YOLO algorithm,” Int. J. Electron. Telecommun., vol. 67, no. 4, pp. 727–733, 2021, doi: 10.24425/ijet.2021.137869.
- [18] T. Mahendrakar, A. Ekblad, N. Fischer, R. White, M. Wilde, B. Kish, and I. Silver, “Performance study of YOLOv5 and Faster R-CNN for autonomous navigation around non-cooperative targets,” in 2022 IEEE Aerospace Conference (AERO), 2022, pp. 1–12, doi: 10.1109/AERO53065.2022.9843537.
- [19] B. Settles, Active Learning. Morgan & Claypool Publishers, 2012.
- [20] F. Marquardt, “Lecture 26: Active learning for network training: Uncertainty sampling and other approaches.” https://www.youtube.com/watch?v=fwHZtqr-uBY, accessed 01.03.2023.
- [21] B. Zhang, L. Li, S. Yang, S. Wang, Z.-J. Zha, and Q. Huang, “State-relabeling adversarial active learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 8756–8765.
- [22] O.M. Cliff, M. Prokopenko, and R. Fitch, “Minimising the Kullback–Leibler divergence for model selection in distributed nonlinear systems,” Entropy, vol. 20, no. 2, p. 51, 2018, doi: 10.3390/e20020051.
- [23] L. Wang, X. Hu, B. Yuan, and J. Lu, “Active learning via query synthesis and nearest neighbour search,” Neurocomputing, vol. 147, pp. 426–434, 2015, doi: 10.1016/j.neucom.2014.06.042.
- [24] K. Lang and E. Baum, “Query learning can work poorly when a human oracle is used.” IEEE Press, pp. 335–340, 1992, https://www.academia.edu/6168656/Query_learning_can_work_poorly_when_a_human_oracle_is_used, accessed 01.03.2023.
- [25] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” in Computer Vision – ECCV 2014, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds. Cham: Springer International Publishing, 2014, pp. 346–361.
- [26] Z. Zheng, P. Wang, W. Liu, J. Li, R. Ye, and D. Ren, “Distance-IoU loss: Faster and better learning for bounding box regression,” 2019. [Online]. Available: https://arxiv.org/abs/1911.08287
- [27] O. Chapelle, B. Schölkopf, and A. Zien, Semi-Supervised Learning. The MIT Press, 2006, doi: 10.7551/mitpress/9780262033589.001.0001.
Document type
YADDA identifier
bwmeta1.element.baztech-f3f344f5-8699-4c8e-a69e-0b3fb4dce0b1