Objectives: This study aims to develop an efficient deep learning-based approach for the detection and segmentation of cell nuclei in microscopic images. By leveraging the U-Net architecture, the research addresses the limitations of traditional computational methods, enhancing the precision and scalability of biomedical image analysis. Methods: A deep learning model based on the U-Net architecture was trained and evaluated for cell nuclei segmentation. The model was optimized by fine-tuning parameters and applying data augmentation techniques, and its performance was evaluated with metrics such as Intersection over Union (IoU). Comparisons with traditional segmentation techniques assessed improvements in accuracy, efficiency, and robustness. Results: The U-Net model demonstrated superior performance in segmenting cell nuclei compared with conventional methods. The results showed higher segmentation accuracy, reduced manual effort, and improved reproducibility across different imaging datasets. The model's high IoU values confirmed its effectiveness in accurately delineating cell nuclei boundaries, making it a reliable tool for automated biomedical image analysis. Conclusions: The study highlights the effectiveness of the U-Net architecture in automated cell nuclei detection and segmentation, addressing challenges associated with manual analysis. Its scalability and adaptability extend its applicability beyond cell nuclei segmentation to other biomedical imaging tasks, offering significant potential for disease diagnosis, therapeutic development, and clinical decision-making. The findings reinforce the transformative impact of deep learning in biomedical research and healthcare applications.
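As a minimal illustration of the Intersection over Union (IoU) metric used to evaluate the segmentation masks, the following sketch computes IoU between two binary masks. The function name and example masks are illustrative assumptions, not taken from the study:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two binary masks (1 = nucleus, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # By convention, two empty masks are treated as a perfect match.
    return float(intersection / union) if union > 0 else 1.0

# Example: a predicted 2x2 nucleus region inside a 2x4 ground-truth region
a = np.zeros((4, 4), dtype=np.uint8); a[:2, :2] = 1  # 4 foreground pixels
b = np.zeros((4, 4), dtype=np.uint8); b[:2, :4] = 1  # 8 foreground pixels
print(iou(a, b))  # → 0.5 (intersection 4, union 8)
```

An IoU of 1.0 indicates a pixel-perfect match between the predicted and reference nuclei masks, while values near 0 indicate little overlap.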
Objectives: The goal is to assess how well advanced convolutional neural networks (CNNs) perform automated cardiac region detection from chest X-ray images, in order to increase diagnostic precision and efficiency in clinical settings. Methods: The dataset comprised 496 high-resolution DICOM chest X-ray images (1024 x 1024). Images were preprocessed with normalization, resizing, and augmentation (e.g., scaling, rotation, contrast correction). Several CNN architectures (AlexNet, GoogLeNet, VGG-16, ResNet-18, and ResNet-50) were trained and compared using metrics including Mean Squared Error (MSE) and Intersection over Union (IoU). Training used the Adam optimizer with a batch size of 32 for 100 epochs. Validation was performed on 96 images, and performance was measured with IoU scores and bounding box prediction accuracy. Results: ResNet-50 outperformed the other models, achieving 93.2% accuracy and a mean IoU of 0.84 with very little variability. In terms of localization accuracy and training stability, the model surpassed the alternative designs and demonstrated strong bounding box prediction. These results demonstrate the reliability of ResNet-50 in pinpointing cardiac regions under varied imaging conditions. Conclusions: The study highlights the transformative potential of deep learning in automating the detection of cardiac regions in chest X-rays. ResNet-50 proved to be the best model, representing a significant step toward incorporating AI-based solutions into diagnostic workflows, especially in resource-limited environments. Future studies should investigate combining detection and segmentation for improved diagnostic insights.
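Since this study evaluates localization with bounding box IoU rather than pixel masks, a sketch of box-level IoU may clarify the metric. The function and coordinate convention (x1, y1, x2, y2) are assumptions for illustration, not details from the paper:

```python
def box_iou(box_a: tuple, box_b: tuple) -> float:
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: two 2x2 boxes overlapping in a 1x1 region
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → 1/7 ≈ 0.1429
```

A mean IoU of 0.84, as reported for ResNet-50, would correspond to predicted cardiac bounding boxes that overlap the reference boxes far more tightly than in this toy example.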