Search results
Searched in keywords: transfer nauki

Results found: 9
1
EN
Variation in powertrain parameters caused by dimensioning, manufacturing and assembly inaccuracies may prevent model-based virtual sensors from representing physical powertrains accurately. Data-driven virtual sensors employing machine learning models offer a way to account for such parameter variations, which can be included efficiently in the virtual sensor's training through simulation. The trained model can then, in principle, be applied to real systems via transfer learning, allowing a data-driven virtual sensor to be trained without the notoriously labour-intensive step of gathering data from a real powertrain. This research presents a training procedure for a data-driven virtual sensor built for a powertrain consisting of multiple shafts, couplings and gears. The procedure generalizes the virtual sensor for a single powertrain with variations corresponding to the aforementioned inaccuracies, using parameter randomization and random excitation. That is, the virtual sensor was trained on data from multiple powertrain instances that all represent roughly the same powertrain. Trained this way on multiple instances of a simulated powertrain, the virtual sensor accurately estimated the rotating speed and torque of the loaded shaft of multiple simulated test powertrains, computing the estimates from the rotating speed and torque at the motor shaft. This research gives excellent grounds for further studies towards simulation-to-reality transfer learning, in which a virtual sensor is trained with simulated data and then applied to a real system.
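A minimal sketch of the parameter-randomization idea in Python, assuming a toy two-inertia torsional model as a stand-in for the authors' multibody powertrain simulation (the model, the ±10% parameter ranges and the MLP sensor are illustrative assumptions, not the paper's implementation):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def simulate_two_mass(k, c, J1, J2, tau_motor, dt=1e-3):
        """Toy stand-in for the powertrain simulation: motor inertia J1 and
        load inertia J2 coupled by a shaft with stiffness k and damping c."""
        w1 = w2 = twist = 0.0
        rows = []
        for u in tau_motor:
            t_shaft = k * twist + c * (w1 - w2)   # torque in the coupling
            w1 += (u - t_shaft) / J1 * dt          # motor-side acceleration
            w2 += t_shaft / J2 * dt                # load-side acceleration
            twist += (w1 - w2) * dt
            rows.append((u, w1, w2, t_shaft))
        return np.array(rows)

    X, y = [], []
    for _ in range(50):                            # 50 randomized instances
        k  = 1e4  * rng.uniform(0.9, 1.1)          # +/-10% parameter spread
        c  = 5.0  * rng.uniform(0.9, 1.1)
        J1 = 0.10 * rng.uniform(0.9, 1.1)
        J2 = 0.20 * rng.uniform(0.9, 1.1)
        tau = rng.normal(0.0, 10.0, 2000)          # random excitation
        sim = simulate_two_mass(k, c, J1, J2, tau)
        X.append(sim[:, :2])                       # motor torque and speed
        y.append(sim[:, 2:])                       # load speed, shaft torque

    sensor = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200)
    sensor.fit(np.vstack(X), np.vstack(y))         # train the virtual sensor

A real implementation would more likely use a sequence model over time windows; the per-sample regressor above only illustrates how randomized instances and random excitation are pooled into one training set.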
2
EN
Brain tumors can be difficult to diagnose, as they may share similar radiographic characteristics, and a thorough examination can take considerable time. To address these challenges, we propose an intelligent system for the automatic extraction and identification of brain tumors from 2D CE MRI images. Our approach comprises two stages. In the first stage, we use an encoder-decoder U-net with a residual network as the backbone to detect different types of brain tumors, including glioma, meningioma, and pituitary tumors. Our method achieved an accuracy of 99.60%, a sensitivity of 90.20%, a specificity of 99.80%, a Dice similarity coefficient of 90.11%, and a precision of 90.50% for tumor extraction. In the second stage, we employ a YOLO2 (You Only Look Once)-based transfer learning approach to classify the extracted tumors, achieving a classification accuracy of 97%. Our proposed approach outperforms the state-of-the-art methods found in the literature. The results demonstrate the potential of our method to aid in the diagnosis and treatment of brain tumors.
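A rough sketch of the two-stage pipeline, assuming the segmentation_models_pytorch package for the residual-backbone U-Net; since YOLO2 is not bundled with torchvision, a pretrained ResNet18 stands in below purely to illustrate the transfer-learning classification step:

    import torch
    import torch.nn as nn
    import segmentation_models_pytorch as smp
    from torchvision import models

    # Stage 1: encoder-decoder U-Net with a residual (ResNet) backbone
    seg_net = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                       in_channels=1, classes=1)

    # Stage 2: transfer-learning classifier for the extracted tumors;
    # a pretrained ResNet18 stands in here for the paper's YOLO2 backbone
    clf = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in clf.parameters():
        p.requires_grad = False                  # freeze pretrained features
    clf.fc = nn.Linear(clf.fc.in_features, 3)    # glioma/meningioma/pituitary

    mri = torch.randn(1, 1, 256, 256)            # dummy 2D CE-MRI slice
    mask = torch.sigmoid(seg_net(mri))           # stage 1: tumor probability
    roi = (mri * (mask > 0.5)).repeat(1, 3, 1, 1)  # mask out tumor, 3-channel
    logits = clf(roi)                            # stage 2: tumor-type scores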
3
EN
Cerebral malaria (CM) is a fatal syndrome commonly found in children under 5 years of age in Sub-Saharan Africa and Asia. The retinal signs associated with CM are known as malarial retinopathy (MR), and they include highly specific retinal lesions such as whitening and hemorrhages. Detecting these lesions allows CM to be diagnosed with high specificity. Up to 23% of CM patients are over-diagnosed, because the clinical symptoms are also consistent with pneumonia, meningitis, and other conditions; such patients then go untreated for the actual pathology, resulting in death or neurological disability. A low-cost, high-specificity diagnostic technique for CM detection is therefore essential, and we developed one based on transfer learning (TL). A pre-trained TL model first selects good-quality retinal images, which are then fed into a second TL model that detects CM. This approach achieves 96% specificity with low-cost retinal cameras.
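A minimal sketch of the two-model cascade, assuming torchvision MobileNetV2 backbones and 0.5 decision thresholds (both are illustrative assumptions; the abstract does not specify the architectures):

    import torch
    import torch.nn as nn
    from torchvision import models

    def tl_head(num_out):
        """Pretrained MobileNetV2 with frozen features and a new head."""
        net = models.mobilenet_v2(
            weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
        for p in net.features.parameters():
            p.requires_grad = False               # keep pretrained features
        net.classifier[1] = nn.Linear(net.last_channel, num_out)
        return net

    quality_net = tl_head(1)   # stage 1: is the retinal image good quality?
    cm_net = tl_head(1)        # stage 2: does it show malarial retinopathy?

    def detect_cm(image):      # image: (1, 3, 224, 224) tensor
        if torch.sigmoid(quality_net(image)) < 0.5:
            return None        # reject low-quality capture, re-image instead
        return torch.sigmoid(cm_net(image)) > 0.5

    print(detect_cm(torch.randn(1, 3, 224, 224)))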
4
EN
The recognition of medical images with deep learning techniques can assist physicians in clinical diagnosis, but the effectiveness of recognition models relies on massive amounts of labeled data. With the rampant spread of the novel coronavirus (COVID-19) worldwide, rapid COVID-19 diagnosis has become an effective measure to combat the outbreak. However, labeled COVID-19 data are scarce. Therefore, we propose a two-stage transfer learning recognition model for medical images of COVID-19 (TL-Med) based on the concept of "generic domain → target-related domain → target domain". First, we use the Vision Transformer (ViT) pretraining model to obtain generic features from massive heterogeneous data and then learn medical features from large-scale homogeneous data. Two-stage transfer learning uses the learned primary features and the underlying information for COVID-19 image recognition, addressing the problem that insufficient data prevents the model from learning the underlying information of the target dataset. The experimental results obtained on a COVID-19 dataset using the TL-Med model show a recognition accuracy of 93.24%, indicating that the proposed method detects COVID-19 images more effectively than other approaches and may greatly alleviate the problem of data scarcity in this field.
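A schematic of the generic → target-related → target transfer chain, assuming the timm package for the ViT backbone; the fine_tune helper, the label counts and the dummy batches are hypothetical placeholders:

    import timm
    import torch
    import torch.nn as nn

    def fine_tune(model, images, labels, epochs=1, lr=1e-4):
        """Hypothetical stand-in for a full fine-tuning loop."""
        opt = torch.optim.AdamW(model.parameters(), lr=lr)
        for _ in range(epochs):
            loss = nn.functional.cross_entropy(model(images), labels)
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 0: generic domain -- ViT pretrained on large heterogeneous data
    vit = timm.create_model("vit_base_patch16_224", pretrained=True)

    # Stage 1: target-related domain -- learn general medical features
    vit.head = nn.Linear(vit.head.in_features, 14)   # e.g. 14 X-ray labels
    fine_tune(vit, torch.randn(4, 3, 224, 224), torch.randint(0, 14, (4,)))

    # Stage 2: target domain -- COVID-19 recognition with scarce labels
    vit.head = nn.Linear(vit.head.in_features, 2)    # COVID vs. non-COVID
    fine_tune(vit, torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,)))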
5
Seismic fault detection with progressive transfer learning
EN
Fault detection in seismic data is a key step in seismic data interpretation. Many techniques have achieved good fault detection results using supervised deep learning, which assumes that the training data and the prediction data have a similar distribution. However, seismic data distributions differ when the prediction data are far away from the training data set, even in the same work area, which leads to unreliable fault detection results. To solve this problem, we first propose a progressive learning framework that updates the training data set, reducing the difference between the training data set and the prediction data. In addition, we propose a fault-label correctness measure to improve the stability of the framework. Finally, we introduce a domain-adversarial neural network (DANN) to reduce the impact of data distribution differences and integrate it into the progressive learning framework. We perform fault detection on actual seismic data: compared with a traditional deep learning model, our method improves fault continuity and recovers more fault details.
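A sketch of the two ingredients in PyTorch: the standard DANN gradient-reversal layer, and a confidence-thresholded pseudo-labeling loop standing in for the paper's fault-label correctness measure (the 0.9 threshold and the map-level confidence are illustrative assumptions):

    import torch

    class GradReverse(torch.autograd.Function):
        """DANN gradient reversal: identity forward, negated gradient back."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)
        @staticmethod
        def backward(ctx, grad):
            return -ctx.lam * grad, None

    def domain_branch(features, lam=1.0):
        # Features flow to the domain classifier through the reversal layer,
        # pushing the encoder towards domain-invariant representations.
        return GradReverse.apply(features, lam)

    def progressive_update(model, train_set, candidates, threshold=0.9):
        """One progressive step: move confidently predicted samples from the
        prediction area into the training set, shrinking the domain gap."""
        for x in candidates:
            pred = torch.sigmoid(model(x))           # fault probability map
            conf = ((pred - 0.5).abs() * 2).mean()   # crude map confidence,
            if conf > threshold:                     # stand-in for the paper's
                train_set.append((x, (pred > 0.5).float()))  # correctness index
        return train_set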
6
CNN-based superresolution reconstruction of 3D MR images using thick-slice scans
EN
Due to inherent physical and hardware limitations, 3D MR images are often acquired in the form of orthogonal thick slices, resulting in highly anisotropic voxels. This causes the partial volume effect, which blurs image details, introduces staircase artifacts and significantly decreases the diagnostic value of images. To restore high-resolution isotropic volumes, we propose to use a convolutional neural network (CNN) driven by patches taken from three orthogonal thick-slice images. To assess the validity and efficiency of this postprocessing approach, we used 1×1×1 mm³-voxel brain images of different modalities, available via the well-known BrainWeb database. They served as a high-resolution reference and were numerically preprocessed to create input images of different slice thickness and anatomical orientation for CNN training, validation and testing. The visual quality of the reconstructed images was indeed superior to that of images obtained by fusion of interpolated thick-slice images, or reconstructed by the CNN from a single input MR scan. A significant increase in objectively computed figures of merit, e.g. the structural similarity index (SSIM), was also observed. Keeping in mind that any single value of such quality metrics aggregates a number of psychophysical effects, we applied the CNN trained on brain images to superresolution reconstruction of synthetic and acquired blood-vessel tree images, and used the restored superresolution volumes to estimate vessel radii. Vessel radius values derived from superresolution images of simulated vessel trees proved significantly more accurate than those obtained from a standard fusion of interpolated thick-slice orthogonal scans, and the CNN-based superresolution images were also superior for scanner-acquired MR scans according to the evaluated parameters. These three experiments show the efficiency of CNN-based image reconstruction for qualitative and quantitative improvement of diagnostic image quality, and illustrate the practical usefulness of transfer learning: networks trained on example images of one kind can be used to restore superresolution images of physically different objects.
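A minimal sketch of the reconstruction network, assuming an SRCNN-style 3D stack (the layer sizes are illustrative assumptions) whose three input channels carry co-registered patches interpolated from the axial, sagittal and coronal thick-slice scans:

    import torch
    import torch.nn as nn

    class OrthoSRCNN(nn.Module):
        """Maps three orthogonal thick-slice views to one isotropic patch."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv3d(3, 64, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv3d(64, 32, kernel_size=1),           nn.ReLU(),
                nn.Conv3d(32, 1, kernel_size=5, padding=2),
            )
        def forward(self, x):    # x: (N, 3, D, H, W) interpolated views
            return self.body(x)

    net = OrthoSRCNN()
    views = torch.randn(1, 3, 32, 32, 32)   # axial/sagittal/coronal patches
    isotropic = net(views)                  # (1, 1, 32, 32, 32) HR estimate

The transfer-learning experiments then amount to reusing such a network, trained on brain patches, to restore vessel-tree volumes without retraining.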
7
EN
For automatic sleep stage classification, existing methods mostly rely on hand-crafted features selected from polysomnographic records. In this paper, the goal is to develop a deep learning-based method using single-channel electroencephalogram (EEG) that automatically exploits the time-frequency spectrum of the EEG signal, removing the need for manual feature extraction. Time-frequency RGB color images of the EEG signal are extracted using the continuous wavelet transform (CWT). Transfer learning of a pre-trained convolutional neural network, SqueezeNet, is employed to classify these CWT images into sleep stages. The proposed method is evaluated on the publicly available PhysioNet Sleep-EDFx dataset using the single EEG channel Fpz-Cz. Evaluation results show that the method achieves near-state-of-the-art accuracy even with a single-channel EEG signal.
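A minimal end-to-end sketch, assuming PyWavelets for the CWT, a jet colormap for the RGB scalogram, and torchvision's SqueezeNet with its final 1×1 convolution swapped for a five-class sleep-stage head (the scales, colormap and epoch length are illustrative assumptions):

    import numpy as np
    import pywt
    import torch
    import torch.nn as nn
    from matplotlib import cm
    from torchvision import models

    # 30 s epoch of single-channel EEG (dummy signal; Fpz-Cz in the paper)
    fs = 100
    eeg = np.random.randn(30 * fs)

    # Time-frequency image via continuous wavelet transform (Morlet)
    coef, _ = pywt.cwt(eeg, scales=np.arange(1, 129), wavelet="morl")
    scalogram = np.abs(coef)
    scalogram = (scalogram - scalogram.min()) / np.ptp(scalogram)
    rgb = cm.jet(scalogram)[..., :3]              # map to an RGB color image

    # Transfer learning: pretrained SqueezeNet with a 5-class head
    net = models.squeezenet1_1(
        weights=models.SqueezeNet1_1_Weights.IMAGENET1K_V1)
    net.classifier[1] = nn.Conv2d(512, 5, kernel_size=1)  # W, N1-N3, REM

    x = torch.tensor(rgb, dtype=torch.float32).permute(2, 0, 1)[None]
    x = nn.functional.interpolate(x, size=(224, 224))     # network input size
    stage_logits = net(x)                                 # sleep-stage scores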
8
EN
In recent years, deep learning and especially deep neural networks (DNN) have achieved impressive performance on a variety of problems, in particular in classification and pattern recognition. Among the many kinds of DNNs, convolutional neural networks (CNN) are the most commonly used. However, due to their complexity, there are many problems related, but not limited, to optimizing network parameters, avoiding overfitting and ensuring good generalization abilities. Therefore, a number of methods have been proposed by researchers to deal with these problems. In this paper, we present the results of applying different, recently developed methods to improve deep neural network training and operation. We focus on the most popular CNN structures, namely VGG-based neural networks: VGG16, VGG11 and our proposed VGG8. The tests were conducted on a real and very important problem of skin cancer detection, using a publicly available dataset of skin lesions as a benchmark. We analyzed the influence of applying dropout, batch normalization, model ensembling and transfer learning, and additionally examined the influence of the activation function type. To increase the objectivity of the results, each of the tested models was trained 6 times and the results were averaged. In addition, to mitigate the impact of the selection of training, test and validation sets, k-fold cross-validation was applied.
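A compact sketch of the examined ingredients; the VGG8 layer layout below is an assumption (the paper defines its own variant), shown only to place batch normalization, dropout, ensembling and k-fold splitting in one picture:

    import torch
    import torch.nn as nn
    from sklearn.model_selection import KFold

    def vgg_block(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                             nn.BatchNorm2d(cout),       # batch normalization
                             nn.ReLU(), nn.MaxPool2d(2))

    class VGG8(nn.Module):          # assumed reduced VGG-style layout
        def __init__(self, classes=2):
            super().__init__()
            self.features = nn.Sequential(
                vgg_block(3, 64), vgg_block(64, 128),
                vgg_block(128, 256), vgg_block(256, 512))
            self.head = nn.Sequential(nn.Flatten(),
                                      nn.Dropout(0.5),   # dropout regularizer
                                      nn.LazyLinear(classes))
        def forward(self, x):
            return self.head(self.features(x))

    data = torch.randn(12, 3, 64, 64)      # dummy skin-lesion batch
    ensemble = [VGG8() for _ in range(3)]  # nets for model ensembling

    for tr_idx, va_idx in KFold(n_splits=3).split(data):
        pass                               # per-fold training would go here

    # Ensembling: average softmax outputs of independently trained nets
    probs = torch.stack([m(data).softmax(1) for m in ensemble]).mean(0)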
9
EN
Ultrasound imaging is widely used for breast lesion differentiation. In this paper we propose a neural transfer learning method for breast lesion classification in ultrasound. As reported in several papers, the content and the style of a particular image can be separated with a convolutional neural network; the style, encoded by the Gram matrix, can be used to perform neural transfer of artistic style. Here we extract the neural style representations of malignant and benign breast lesions using the VGG19 neural network, and then use Fisher discriminant analysis to separate those style representations and perform classification. The proposed approach achieves good classification performance (AUC of 0.847). Our method is compared with another transfer learning technique based on extracting pooling-layer features (AUC of 0.826). Moreover, we apply Fisher discriminant analysis to differentiate breast lesions directly on ultrasound images (AUC of 0.758). Additionally, we extract the eigenimages related to malignant and benign breast lesions and show that these eigenimages exhibit features commonly associated with lesion type, such as contour attributes or shadowing. The proposed techniques may be useful for researchers interested in ultrasound breast lesion characterization.
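A minimal sketch of the style-based classifier, assuming VGG19 features truncated after an early convolutional block (the exact layer is an assumption) and scikit-learn's LDA as the Fisher discriminant:

    import torch
    from torchvision import models
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Style encoder: early convolutional part of a pretrained VGG19
    vgg = models.vgg19(
        weights=models.VGG19_Weights.IMAGENET1K_V1).features[:12].eval()

    def gram_features(img):
        """Flattened Gram matrix of VGG19 feature maps (the neural 'style')."""
        with torch.no_grad():
            f = vgg(img).squeeze(0)          # (C, H, W) feature maps
        f = f.flatten(1)                     # (C, H*W)
        g = f @ f.T / f.shape[1]             # channel-correlation Gram matrix
        return g.flatten().numpy()

    # Dummy stand-ins for benign/malignant B-mode lesion images
    X = [gram_features(torch.randn(1, 3, 224, 224)) for _ in range(10)]
    y = [0] * 5 + [1] * 5                    # 0 = benign, 1 = malignant

    lda = LinearDiscriminantAnalysis()       # Fisher discriminant analysis
    lda.fit(X, y)                            # separate the style vectors
    print(lda.predict([X[0]]))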