Article title

Surgical phase classification and operative skill assessment through spatial context aware CNNs and time-invariant feature extracting autoencoders

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Automated surgical video analysis promises improved healthcare. We propose a novel spatial-context-aware combined loss function for end-to-end encoder-decoder training for Surgical Phase Classification (SPC) on laparoscopic cholecystectomy (LC) videos. The proposed loss function leverages fine-grained class activation maps obtained from fused multi-layer Layer-CAM for supervised learning of SPC, yielding improved Layer-CAM explanations. After classification, we apply graph theory to incorporate known hierarchies of surgical phases. We report a peak SPC accuracy of 96.16%, precision of 94.08%, and recall of 90.02% on the public Cholec80 dataset, which comprises 7 phases. Our proposed method uses just 73.5% of the parameters of the existing state-of-the-art methodology while improving accuracy by 0.5% and precision by 1.76%, with comparable recall and an order of magnitude lower standard deviation. We also propose a DNN-based surgical skill assessment methodology. This approach uses the surgical phase prediction scores from the final fully-connected layer of the spatial-context-aware classifier to form a multi-channel temporal signal of surgical phases. A time-invariant representation is obtained from this temporal signal through time- and frequency-domain analyses. Autoencoder-based time-invariant features are used for reconstruction and for identification of prominent peaks in dissimilarity curves. We devise a surgical skill measure (SSM) based on the spatial-context-aware temporal-prominence-of-peaks curve. SSM values are expected to be high when the surgery is executed skillfully, aligning with the expert-assessed GOALS metric. We illustrate this trend on the Cholec80 and m2cai16-tool datasets in comparison with the GOALS metric. The trend of SSM concurs with the GOALS metric on these test videos, making our approach a promising step towards automated surgical skill assessment.
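The skill-assessment pipeline summarized above (phase-score signal, autoencoder reconstruction, dissimilarity peaks, SSM) can be illustrated with a minimal sketch. The Python snippet below is an assumption-laden illustration, not the authors' implementation: a moving-average filter stands in for the learned time-invariant autoencoder reconstruction, and `phase_scores`, `dissimilarity`, and the final aggregation of peak prominences into an SSM-like score are hypothetical names and choices.

import numpy as np
from scipy.signal import find_peaks, peak_prominences

rng = np.random.default_rng(0)

# Stand-in for per-frame phase prediction scores from the classifier's
# final fully-connected layer: T frames x 7 phases (Cholec80 defines 7 phases).
T, num_phases = 1000, 7
phase_scores = rng.random((T, num_phases))
phase_scores /= phase_scores.sum(axis=1, keepdims=True)

# Placeholder for the autoencoder reconstruction of the multi-channel
# temporal signal; here a moving average imitates a smooth reconstruction.
kernel = np.ones(25) / 25.0
reconstruction = np.stack(
    [np.convolve(phase_scores[:, c], kernel, mode="same")
     for c in range(num_phases)], axis=1)

# Dissimilarity curve: per-frame reconstruction error across channels.
dissimilarity = np.linalg.norm(phase_scores - reconstruction, axis=1)

# Locate peaks in the dissimilarity curve and measure their prominence.
peaks, _ = find_peaks(dissimilarity)
prominences = peak_prominences(dissimilarity, peaks)[0]

# Hypothetical aggregation: smoother, more skillful executions should show
# fewer and less prominent peaks, so invert the mean prominence.
ssm = 1.0 / (1.0 + prominences.mean()) if peaks.size else 1.0
print(f"peaks: {peaks.size}, SSM-like score: {ssm:.3f}")

Under this sketch, a video with erratic phase predictions produces a spiky dissimilarity curve and a low score, while a smooth, well-ordered execution scores closer to 1, mirroring the expected concurrence of SSM with the GOALS metric.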
Authors
  • International Institute of Information Technology Bangalore, 26/C, Hosur Rd, Electronics City Phase 1, Electronic City, Bangalore 560100, Karnataka, India
author
  • International Institute of Information Technology Bangalore, Electronic City, Bangalore, Karnataka, India
Bibliography
  • [1] Jin A, Yeung S, Jopling J, et al. Tool detection and operative skill assessment in surgical videos using region-based convolutional neural networks. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). p. 691-9.
  • [2] Garrow CR, Kowalewski KF, Li L, et al. Machine learning for surgical phase recognition: A systematic review. Ann Surg 2021;273:684-93.
  • [3] Saeidi H, Opfermann JD, Kam M, et al. Autonomous robotic laparoscopic surgery for intestinal anastomosis. Sci Robot 2022;7(62):eabj2908.
  • [4] Jahmunah V, En Wei Koh J, Sudarshan VK, et al. Endoscopy, video capsule endoscopy, and biopsy for automated celiac disease detection: A review. Biocybernet Biomed Eng 2023;43(1):82-108.
  • [5] Demir KC, Schieber H, Weise T, et al. Deep Learning in Surgical Workflow Analysis: A review; 2022. URL: https://www.techrxiv.org/articles/preprint/Surgical_Phase_Recognition_A_Review_and_Evaluation_of_Current_Approaches/19665717. https://doi.org/10.36227/techrxiv.19665717.v2.
  • [6] [dataset] m2cai16 public dataset, http://camma.u-strasbg.fr/datasets.
  • [7] [dataset] Cholec80 public dataset, http://camma.u-strasbg.fr/datasets.
  • [8] Wagner M, Müller-Stich BP, Kisilenko A, et al. Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark. Med Image Anal 2023:102770.
  • [9] Schoeffmann K, Taschwer M, Sarny S, et al. Cataract-101: video dataset of 101 cataract surgeries. In: César P, Zink M, Murray N, editors. Proceedings of the 9th ACM Multimedia Systems Conference. p. 421-5.
  • [10] Al Hajj H, Lamard M, Conze PH, et al. CATARACTS: Challenge on automatic tool annotation for cataract surgery. Med Image Anal 2019;52:24-41.
  • [11] Kitaguchi D, Takeshita N, Matsuzaki H, et al. Real-time automatic surgical phase recognition in laparoscopic sigmoidectomy using the convolutional neural network-based deep learning approach. Surg Endosc 2020;34(11):4924-31.
  • [12] Mohammad T. Brace for variability in tool positioning: Modeling and simulation of 1-DOF needle insertion task under tool-braced condition. Biocybernet Biomed Eng 2015;35(2):128-37.
  • [13] Arpaia P, Bracale U, Corcione F, et al. Assessment of blood perfusion quality in laparoscopic colorectal surgery by means of machine learning. Sci Rep 2022;12(1):14682.
  • [14] Abbing JR, Voskens FJ, Gerats BGA, et al. Towards an AI-based assessment model of surgical difficulty during early phase laparoscopic cholecystectomy. Comput Methods Biomech Biomed Eng: Imag Visual 2023:1-8.
  • [15] Zhu P, Liao W, Zhang WG, et al. A prospective study using propensity score matching to compare long-term survival outcomes after robotic-assisted, laparoscopic, or open liver resection for patients with BCLC stage 0-A hepatocellular carcinoma. Ann Surg 2023;277(1).
  • [16] Bu J, Li N, He S, et al. Effect of laparoscopic surgery for colorectal cancer with N.O.S.E. on recovery and prognosis of patients. Minim Invasive Ther Allied Technol 2020;31(2):230-7.
  • [17] Kano D, Hu C, Thornley CJ, et al. Risk factors associated with venous thromboembolism in laparoscopic surgery in nonobese patients with benign disease. Surg Endosc 2023;37(1):592-606.
  • [18] Huang C, Liu H, Hu Y, et al. Laparoscopic vs open distal gastrectomy for locally advanced gastric cancer: five-year outcomes from the CLASS-01 randomized clinical trial. JAMA Surg 2022;157(1):9-17.
  • [19] Wang C, Feng H, Zhu X, et al. Comparative effectiveness of enhanced recovery after surgery program combined with single-incision laparoscopic surgery in colorectal cancer surgery: A retrospective analysis. Front Oncol 2022;11:768299.
  • [20] Zhou X, Shao Y, Wu C, et al. Application of a highly simulated and adaptable training system in the laparoscopic training course for surgical residents: Experience from a high-volume teaching hospital in China. Heliyon 2023;9(2):e13317.
  • [21] Haug TR, Miskovic D, Ørntoft MBW, et al. Development of a procedure-specific tool for skill assessment in left- and right-sided laparoscopic complete mesocolic excision. Colorectal Dis 2022;25(1):31-43.
  • [22] Al-Hubaishi O, Hillier T, Gillis M, et al. Video-based assessment (VBA) of an open, simulated orthopedic surgical procedure: a pilot study using a single-angle camera to assess surgical skill and decision making. J Orthop Surg Res 2023;18(1):90.
  • [23] van den Broek BLJ, Zwart MJW, Bonsing BA, et al. Video grading of pancreatic anastomoses during robotic pancreatoduodenectomy to assess both learning curve and the risk of pancreatic fistula - a post hoc analysis of the LAELAPS-3 training program. Ann Surg 2023.
  • [24] Fung ACH, Chan IHY, Wong KKY. Outcome and learning curve for laparoscopic intra-corporeal inguinal hernia repair in children. Surg Endosc 2022;37(1):434-42.
  • [25] Twinanda AP, Shehata S, Mutter D, et al. EndoNet: A deep architecture for recognition tasks on laparoscopic videos. IEEE Trans Med Imag 2017;36(1):86-97.
  • [26] Jin Y, Dou Q, Chen H, et al. SV-RCNet: Workflow recognition from surgical videos using recurrent convolutional network. IEEE Trans Med Imag 2018;37(5):1114-26.
  • [27] Jin Y, Li H, Dou Q, et al. Multi-task recurrent convolutional network with correlation loss for surgical video analysis. Med Image Anal 2020;59:101572.
  • [28] Pradeep CS, Sinha N. Multi-tasking DSSD architecture for laparoscopic cholecystectomy surgical assistance systems. In: 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI). p. 1-4.
  • [29] Jalal NA, Abdulbaki Alshirbaji T, Laufer B, et al. Analysing multi-perspective patient-related data during laparoscopic gynaecology procedures. Sci Rep 2023;13(1):1604.
  • [30] Padovan E, Marullo G, Tanzi L, et al. A deep learning framework for real-time 3D model registration in robot-assisted laparoscopic surgery. Int J Med Robot 2022;18(3): e2387.
  • [31] Pradeep CS, Sinha N. Spatio-temporal features based surgical phase classification using CNNs. In: 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). p. 3332-5.
  • [32] Czempiel T, Paschali M, Keicher M, et al. TeCNO: Surgical phase recognition with multi-stage temporal convolutional networks. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. p. 343-52.
  • [33] Jin Y, Long Y, Chen C, et al. Temporal memory relation network for workflow recognition from surgical video. IEEE Trans Med Imag 2021;40(7):1911-23.
  • [34] Czempiel T, Paschali M, Ostler D, et al. OperA: Attention-regularized transformers for surgical phase recognition. In: de Bruijne M, Cattin PC, Cotin S, et al., editors. Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. p. 604-14.
  • [35] Kadkhodamohammadi A, Luengo I, Stoyanov D. PATG: position-aware temporal graph networks for surgical phase recognition on laparoscopic videos. Int J Comput Assist Radiol Surg 2022;17(5):849-56.
  • [36] Ding X, Li X. Exploring segment-level semantics for online phase recognition from surgical videos. IEEE Trans Med Imag 2022;41(11):3309-19.
  • [37] Jin Y, Long Y, Gao X, et al. Trans-SVNet: hybrid embedding aggregation transformer for surgical workflow analysis. Int J Comput Assist Radiol Surg 2022;17(12):2193-202.
  • [38] Park M, Oh S, Jeong T, Yu S. Multi-stage temporal convolutional network with moment loss and positional encoding for surgical phase recognition. Diagnostics 2022;13(1):107.
  • [39] Jalal NA, Alshirbaji TA, Docherty PD, et al. A deep learning framework for recognising surgical phases in laparoscopic videos. IFAC-PapersOnLine 2021;54(15):334-9.
  • [40] Jalal NA, Alshirbaji TA, Docherty PD, et al. Laparoscopic video analysis using temporal, attention, and multi-feature fusion based approaches. Sensors (Basel) 2023;23(4):1-19.
  • [41] Pan X, Gao X, Wang H, et al. Temporal-based swin transformer network for workflow recognition of surgical video. Int J Comput Assist Radiol Surg 2022;18:139-47.
  • [42] Zhang B, Goel B, Sarhan MH, et al. Surgical workflow recognition with temporal convolution and transformer for action segmentation. Int J Comput Assist Radiol Surg 2022:1-10.
  • [43] Deng J, Dong W, Socher R, et al. ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. p. 248-55.
  • [44] Howard AG, Zhu M, Chen B, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861; 2017. URL: http://arxiv.org/abs/1704.04861.
  • [45] Wang RJ, Li X, Ling CX. Pelee: A real-time object detection system on mobile devices. In: Bengio S, Wallach H, Larochelle H, et al. (Eds.), Advances in Neural Information Processing Systems, volume 31; 2018.
  • [46] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). p. 770-8.
  • [47] Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). p. 2261-9.
  • [48] Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). p. 1-9.
  • [49] Szegedy C, Ioffe S, Vanhoucke V, Alemi AA. Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI’17. p. 4278-84.
  • [50] Hu J, Shen L, Sun G. Squeeze-and-excitation networks. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. p. 7132-41.
  • [51] Tan M, Le Q. EfficientNetV2: Smaller models and faster training. In: Meila M, Zhang T, editors. Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research. p. 10096-106.
  • [52] Smith LN. Cyclical learning rates for training neural networks. In: 2017 IEEE Winter Conference on Applications of Computer Vision (WACV). p. 464-72.
  • [53] Li J, Yang X, Chu G, et al. Application of improved robot-assisted laparoscopic telesurgery with 5G technology in urology. Eur Urol 2022;83(1):41-4.
  • [54] Zhuang F, Qi Z, Duan K, et al. A comprehensive survey on transfer learning. Proc IEEE 2021;109(1):43-76.
  • [55] Mendes A, Togelius J, Coelho LdS. Multi-stage transfer learning with an application to selection process. In: European Conference on Artificial Intelligence. Springer; 2020. p. 1295-302.
  • [56] Zhou B, Khosla A, Lapedriza A, et al. Learning deep features for discriminative localization. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). p. 2921-9.
  • [57] Selvaraju RR, Cogswell M, Das A, et al. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In: 2017 IEEE International Conference on Computer Vision (ICCV). p. 618-26.
  • [58] Chattopadhay A, Sarkar A, Howlader P, Balasubramanian VN. Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). p. 839-47.
  • [59] Wang H, Wang Z, Du M, et al. Score-CAM: Score-weighted visual explanations for convolutional neural networks. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). p. 111-9.
  • [60] Jiang PT, Zhang CB, Hou Q, et al. LayerCAM: Exploring hierarchical class activation maps for localization. IEEE Trans Image Process 2021;30:5875-88.
  • [61] Tan M, Le Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In: Chaudhuri K, Salakhutdinov R, editors. Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research. p. 6105-14.
  • [62] Lam K, Chen J, Wang Z, et al. Machine learning for technical skill assessment in surgery: a systematic review. npj Digital Med 2022;5(1):24.
  • [63] Goodfellow I, Bengio Y, Courville A. Deep learning. MIT Press; 2016.
  • [64] Pradeep CS, Sinha N. Fused multi-layer Layer-CAM fine-grained spatial feature supervision for surgical phase classification using CNNs. In: Karlinsky L, Michaeli T, Nishino K, editors. Computer Vision – ECCV 2022 Workshops; 2023. p. 712-26.
  • [65] Paszke A, Chaurasia A, Kim S, Culurciello E. ENet: A deep neural network architecture for real-time semantic segmentation. In: European Conference on Computer Vision. Springer; 2016. p. 321-37.
  • [66] De Ryck T, De Vos M, Bertrand A. Change point detection in time series data using autoencoders with a time-invariant representation. IEEE Trans Signal Process 2021;69:3513-24.
  • [67] Lee W, Ortiz J, Ko B, Lee RB. Time series segmentation through automatic feature learning. arXiv preprint arXiv:1801.05394; 2018. URL: http://arxiv.org/abs/1801.05394.
  • [68] Naitzat G, Zhitnikov A, Lim LH. Topology of deep neural networks. J Mach Learn Res 2022;21(1):7503-42.
  • [69] Müller SG, Hutter F. TrivialAugment: Tuning-free yet state-of-the-art data augmentation. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV). p. 754-62.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-5376a95e-4c50-45ab-8c5d-7f08fc892593