Search results
Searched in keywords: fully convolutional network
Results found: 4
EN
Background: The corpus callosum (CC) is the most prominent white matter bundle in the human brain, connecting the left and right cerebral hemispheres. The present paper proposes a novel method for CC segmentation from 2D T1-weighted mid-sagittal brain MRI. Robust segmentation of the CC in the mid-sagittal plane plays a vital role in the quantitative study of CC structural features related to various neurological disorders such as autism, epilepsy, and Alzheimer's disease. Methodology: To this end, the current work proposes a fully convolutional network (FCN), a deep-learning U-Net architecture for automated CC segmentation from 2D brain MRI images, referred to as CCsNeT. The architecture is a 35-layer fully convolutional network with two paths, contracting and expanding, connected in a U-shape that automatically extracts spatial information. Results: The experimental investigation uses the benchmark brain MRI databases ABIDE and OASIS. Compared to existing CC segmentation methods, the proposed CCsNeT achieved improved results, with a Dice coefficient of 96.74% and a sensitivity of 97.01% on the ABIDE dataset, and was further validated against the U-Net variants U-Net++, MultiResU-Net, and CE-Net. The performance of CCsNeT was also validated on the OASIS and Real-Time Images datasets. Conclusion: Finally, the proposed CCsNeT extracts important CC characteristics such as CC area (CCA) and total brain area (TBA) to categorize the considered 2D MRI slices into control and autism spectrum disorder (ASD) groups, thereby minimizing inter-observer and intra-observer variability.
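The abstract does not include the CCsNeT implementation. As a rough illustration of the U-Net pattern it describes, the sketch below shows a small encoder-decoder FCN in PyTorch with contracting and expanding paths joined by skip connections, together with the Dice coefficient used as the reported metric. The layer counts, channel widths, and names such as MiniUNet and double_conv are illustrative assumptions, not the authors' 35-layer design.

```python
# Illustrative U-Net-style fully convolutional network (not the authors' CCsNeT).
# Assumptions: single-channel 2D MRI slices, binary CC mask, PyTorch.
import torch
import torch.nn as nn


def double_conv(in_ch, out_ch):
    # Two 3x3 conv + ReLU blocks, the basic unit of both U-Net paths.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(1, 32)          # contracting path
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)  # expanding path
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, 1, 1)         # 1x1 conv -> CC probability map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))


def dice_coefficient(pred, target, eps=1e-6):
    # Dice = 2 * |A intersect B| / (|A| + |B|), the overlap metric reported above.
    pred = (pred > 0.5).float()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)


if __name__ == "__main__":
    x = torch.randn(1, 1, 128, 128)            # one mid-sagittal slice (dummy data)
    mask = MiniUNet()(x)
    print(mask.shape, dice_coefficient(mask, (mask > 0.5).float()))
```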
2D inversion of magnetotelluric data using deep learning technology
EN
The inverse problem of magnetotelluric data is extremely difficult because of its nonlinear and ill-posed nature. Existing gradient-descent approaches suffer from falling into local minima and from relying on a good initial model, while statistics-based methods are computationally expensive. Inspired by the excellent nonlinear mapping ability of deep learning, this study presents a novel magnetotelluric inversion method based on fully convolutional networks. The approach directly builds an end-to-end mapping from apparent resistivity and phase data to the resistivity anomaly model. The implementation of the proposed method consists of two stages: training and testing. During the training stage, the weight-sharing mechanism of the fully convolutional network is exploited, and only single-anomalous-body model samples are used for training, which greatly shortens the modelling time and reduces the difficulty of network training. After that, unknown combined anomaly models can be reconstructed from the magnetotelluric data using the trained network. The proposed method is tested on both synthetic and field data. The results show that the proposed deep-learning-based inversion method is computationally efficient and has high imaging accuracy.
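As a hedged illustration of the end-to-end mapping described above, the sketch below shows a small fully convolutional network that maps a two-channel (apparent resistivity, phase) input to a resistivity model grid and trains it with a mean-squared-error loss. The layer sizes, grid shapes, and the name MTInversionFCN are assumptions for demonstration only, not the authors' network or training setup.

```python
# Illustrative end-to-end FCN for magnetotelluric inversion (a sketch, not the
# published network).  Assumed data layout: input channels = apparent resistivity
# and phase sampled on a (frequency x site) grid; output = resistivity model on a
# grid of the same spatial size.  All names and sizes are hypothetical.
import torch
import torch.nn as nn


class MTInversionFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),            # 1x1 conv -> predicted resistivity model
        )

    def forward(self, x):
        return self.net(x)


def train_step(model, optimizer, data, target):
    # One supervised step: map (apparent resistivity, phase) -> resistivity model.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(data), target)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = MTInversionFCN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Dummy "single anomalous body" training batch: 2 x 32 x 32 responses and a
    # 1 x 32 x 32 resistivity model.  Real samples would come from forward modelling.
    data = torch.randn(8, 2, 32, 32)
    target = torch.randn(8, 1, 32, 32)
    for _ in range(3):
        print(train_step(model, opt, data, target))
```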
EN
Differential diagnosis of malignant and benign mediastinal lymph nodes (LNs) through invasive pathological tests is a complex and painful procedure because of the sophisticated anatomical locations of LNs in the chest. Image-based automatic machine learning techniques have been attempted in the past for malignancy detection, but these conventional methods suffer from the complex selection of hand-crafted features and the resulting trade-off between performance parameters. Today, deep learning approaches outperform conventional machine learning techniques and are able to overcome these issues. However, existing convolutional neural network (CNN) based models are also prone to overfitting because of their fully connected (FC) layers. Therefore, in this paper the authors propose a fully convolutional network (FCN) based deep learning model for lymph node malignancy detection in computed tomography (CT) images. Moreover, the proposed FCN is customized with batch normalization and the advanced activation function Leaky ReLU to accelerate training and to overcome the dying-ReLU problem, respectively. The performance of the proposed FCN is also tuned for a smaller data size using data augmentation methods. The generalization of the proposed model is tested through network parameter variation. To assess the reliability of the proposed model, it is also compared with state-of-the-art related deep learning networks. The proposed FCN model achieved an average accuracy, sensitivity, specificity, and area under the curve of 90.28%, 90.63%, 89.95%, and 0.90, respectively. The results also confirm the usefulness of augmentation methods when applying deep learning approaches to smaller datasets.
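The following sketch illustrates the ingredients named in the abstract, convolutional blocks with batch normalization and Leaky ReLU, no fully connected layers, and a simple augmentation step, in PyTorch. The actual layer counts, augmentation policy, and the names LymphNodeFCN and conv_bn_lrelu are assumptions rather than the published architecture.

```python
# Sketch of an FCN-style classifier with BatchNorm and Leaky ReLU, plus a trivial
# flip-based augmentation step.  Illustrative only; not the paper's exact model.
import torch
import torch.nn as nn


def conv_bn_lrelu(in_ch, out_ch):
    # Convolution -> BatchNorm (faster training) -> LeakyReLU (avoids "dying ReLU").
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )


class LymphNodeFCN(nn.Module):
    # Fully convolutional classifier: no fully connected layers; global average
    # pooling over the final feature map yields the malignant/benign logits.
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            conv_bn_lrelu(1, 16), nn.MaxPool2d(2),
            conv_bn_lrelu(16, 32), nn.MaxPool2d(2),
            conv_bn_lrelu(32, 64),
        )
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        return self.head(self.features(x)).mean(dim=(2, 3))  # global average pooling


def augment(batch):
    # Minimal augmentation example: random horizontal flip of CT patches.
    flip = torch.rand(batch.size(0)) < 0.5
    batch = batch.clone()
    batch[flip] = torch.flip(batch[flip], dims=[-1])
    return batch


if __name__ == "__main__":
    x = torch.randn(4, 1, 64, 64)              # dummy CT patches
    logits = LymphNodeFCN()(augment(x))
    print(logits.shape)                        # (4, 2) malignant/benign scores
```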
EN
Liver cancer, which is a leading cause of cancer death, is commonly diagnosed by comparing changes in the gray level of liver tissue across the different phases of a patient's CT images. To help reduce misdiagnosis or missed diagnosis, a fully automatic computer-aided diagnosis (CAD) system is proposed to diagnose hepatocellular carcinoma (HCC) using a convolutional neural network (CNN) classifier. Automatic segmentation and classification are the two core technologies of the proposed CAD system, and both are realized with CNNs. Segmentation of the liver and tumor is implemented by a fully convolutional network (FCN) based on a fine-tuned VGG-16 model with two additional 'skip structures', using a weighted loss function that helps solve the problem of inaccurate tumor segmentation caused by the inevitably unbalanced training data. HCC classification is implemented by a 9-layer CNN classifier whose input is a 4-channel image constructed by combining the segmentation result of the FCN with the original CT image. A total of 165 venous-phase CT images, including 46 diffuse tumors, 43 nodular tumors, and 76 massive tumors, are used to evaluate the performance of the proposed CAD system. The classification accuracies of the CNN classifier for diffuse, nodular, and massive tumors are 98.4%, 99.7%, and 98.7%, respectively, which is a significant improvement over traditional feature-based ANN and SVM classifiers. The proposed CAD system, which is unaffected by differences in preprocessing method and feature type, is shown to be satisfactory and feasible on the test set.
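Two details from this abstract, the weighted loss for unbalanced liver/tumor segmentation and the 4-channel classifier input formed from the FCN output and the original CT image, can be illustrated with the short PyTorch sketch below. The class weights, channel ordering, and helper name build_classifier_input are assumed for demonstration, not taken from the paper.

```python
# Sketch: class-weighted segmentation loss and 4-channel classifier input.
# Illustrative assumptions throughout; not the authors' implementation.
import torch
import torch.nn as nn

# Weighted cross-entropy: rare tumor pixels get a larger weight than background
# and liver, counteracting the unbalanced training data mentioned in the abstract.
class_weights = torch.tensor([0.2, 1.0, 3.0])      # background, liver, tumor (assumed)
seg_loss = nn.CrossEntropyLoss(weight=class_weights)


def build_classifier_input(ct_slice, seg_logits):
    # ct_slice:   (N, 1, H, W) original CT image
    # seg_logits: (N, 3, H, W) FCN output (background / liver / tumor scores)
    # Returns a 4-channel tensor: CT image stacked with per-class probabilities.
    seg_probs = torch.softmax(seg_logits, dim=1)
    return torch.cat([ct_slice, seg_probs], dim=1)  # (N, 4, H, W)


if __name__ == "__main__":
    ct = torch.randn(2, 1, 96, 96)
    logits = torch.randn(2, 3, 96, 96)              # stand-in for FCN output
    labels = torch.randint(0, 3, (2, 96, 96))       # per-pixel ground truth
    print(seg_loss(logits, labels).item())
    print(build_classifier_input(ct, logits).shape)  # torch.Size([2, 4, 96, 96])
```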