Search results
Results found: 6
EN
Breast cancer causes a large number of deaths among women every year, and accurate localization of breast lesions is a crucial stage in diagnosis. Segmentation of breast ultrasound images contributes to improving the detection of breast anomalies. This paper presents an automatic approach to the segmentation of breast ultrasound images. The proposed model, a modified U-Net called Attention Residual U-Net, is designed to help radiologists determine the boundaries of breast tumors during clinical examination. Attention Residual U-Net combines existing components: the convolutional U-Net architecture, the attention gate mechanism, and residual connections from the Residual Neural Network. The public breast ultrasound image dataset from Baheya Hospital in Egypt is used in this work. The Dice coefficient, the Jaccard index, and accuracy are used to evaluate the performance of the proposed model on the test set: Attention Residual U-Net achieves a Dice coefficient of 90%, a Jaccard index of 76%, and an accuracy of 90%. The proposed model is compared with two other breast segmentation methods on the same dataset, and the results show that the modified U-Net achieves accurate segmentation of breast lesions in breast ultrasound images.
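The Dice coefficient and Jaccard index reported above are standard overlap metrics between a predicted segmentation mask and the ground-truth mask. A minimal sketch of how they are computed on binary masks (pure Python, not the authors' code):

```python
def dice_and_jaccard(pred, truth):
    """Overlap metrics between two binary masks given as flat 0/1 lists."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)  # intersection
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - tp
    dice = 2 * tp / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    jaccard = tp / union if union else 1.0
    return dice, jaccard

# toy 1-D example: 4 predicted pixels, 5 true pixels, 3 overlapping
pred  = [1, 1, 1, 1, 0, 0, 0, 0]
truth = [0, 1, 1, 1, 1, 1, 0, 0]
d, j = dice_and_jaccard(pred, truth)  # dice = 2*3/9 ≈ 0.667, jaccard = 3/6 = 0.5
```

For a single mask pair the two metrics are linked by J = D/(2 − D); averaged over a whole test set they can diverge, which is consistent with the 90% Dice versus 76% Jaccard figures above.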
EN
Transfer Learning (TL) is a popular deep learning technique in medical image analysis, especially when data is limited. It leverages pre-trained knowledge from State-Of-The-Art (SOTA) models and applies it to specific applications through Fine-Tuning (FT). However, fine-tuning large models can be time-consuming, and determining which layers to tune can be challenging. This study explores different fine-tuning strategies for five SOTA models (VGG16, VGG19, ResNet50, ResNet101, and InceptionV3) pre-trained on ImageNet. It also investigates the impact of the classifier by using a linear SVM for classification. The experiments are performed on four open-access ultrasound datasets related to breast cancer, thyroid nodule cancer, and salivary gland cancer. Results are evaluated using five-fold stratified cross-validation, and metrics such as accuracy, precision, and recall are computed. The findings show that fine-tuning the last 15% of layers in ResNet50 and InceptionV3 achieves good results, and using an SVM for classification further improves overall performance by 6% for the two best-performing models. This research provides insights into fine-tuning strategies and the importance of the classifier in transfer learning for ultrasound image classification.
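"Fine-tuning 15% of the last layers", as above, means freezing the early layers of the pre-trained backbone and leaving only the final portion trainable. A framework-agnostic sketch of that freezing logic (the layer names are hypothetical; in Keras or PyTorch the same loop would set `layer.trainable` or `param.requires_grad` instead of returning flags):

```python
import math

def trainable_flags(layer_names, fraction=0.15):
    """Return {layer_name: bool}; only the last `fraction` of layers stay trainable."""
    n_trainable = math.ceil(len(layer_names) * fraction)
    cutoff = len(layer_names) - n_trainable
    return {name: i >= cutoff for i, name in enumerate(layer_names)}

# hypothetical 20-layer backbone: the last 3 layers (15% of 20) remain trainable
layers = [f"conv{i}" for i in range(1, 21)]
flags = trainable_flags(layers)
# sum(flags.values()) == 3; flags["conv20"] is True, flags["conv1"] is False
```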
EN
Many countries have adopted public health approaches to address the particular challenges posed by the Coronavirus disease 2019 (COVID-19) pandemic. Researchers mobilized to manage and limit the spread of the virus, and multiple artificial-intelligence-based systems have been designed to detect the disease automatically. Among these are voice-based systems, since the virus has a major impact on voice production due to dysfunction of the respiratory system. In this paper, we investigate and analyze the effectiveness of cough analysis for accurately detecting COVID-19, distinguishing COVID-positive patients from healthy controls. After extracting the gammatone cepstral coefficients (GTCC) and the Mel-frequency cepstral coefficients (MFCC), we performed feature selection (FS) and classification with multiple machine learning algorithms. Combining all features with the 3-nearest-neighbor (3NN) classifier achieved the highest classification results: the model detects COVID-19 patients with accuracy and F1-score above 98 percent. When applying FS, the highest accuracy and F1-score were achieved by the same model with the ReliefF algorithm; only 1 percent of accuracy is lost by keeping just 12 features instead of the original 53.
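The 3-nearest-neighbor classifier used above assigns each cough recording the majority label of its three closest training examples in feature space. A minimal pure-Python sketch on toy 2-D feature vectors (stand-ins for the GTCC/MFCC features; not the authors' pipeline):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k training points closest to x (Euclidean distance)."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# toy features: "covid" samples cluster near (1, 1), "healthy" near (5, 5)
train_X = [(1, 1), (1, 2), (2, 1), (5, 5), (5, 6), (6, 5)]
train_y = ["covid", "covid", "covid", "healthy", "healthy", "healthy"]
knn_predict(train_X, train_y, (1.5, 1.5))  # -> "covid"
```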
EN
In 2019, the whole world faced a health emergency due to the emergence of the coronavirus (COVID-19). About 223 countries were affected by the virus. Medical and health services face difficulties in managing the disease, which requires a significant amount of health-system resources. Several artificial-intelligence-based systems have been designed to detect COVID-19 automatically and limit the spread of the virus. Researchers have found that this virus has a major impact on voice production due to dysfunction of the respiratory system. In this paper, we investigate and analyze the effectiveness of cough analysis for accurately detecting COVID-19. To do so, we performed binary classification, distinguishing COVID-positive patients from healthy controls. The records are collected from the Coswara dataset, a crowdsourcing project of the Indian Institute of Science (IISc). After data collection, we extracted MFCC features from the cough records. These acoustic features are mapped directly to a decision tree (DT), a k-nearest-neighbor classifier (kNN, with k equal to 3), a support vector machine (SVM), and a deep neural network (DNN), or first reduced in dimensionality using principal component analysis (PCA), retaining either 95 percent of the variance or 6 principal components. The 3NN classifier with all features produced the best classification results, detecting COVID-19 patients with an accuracy of 97.48 percent, an F1-score of 96.96 percent, and an MCC of 0.95, suggesting that this method can accurately distinguish healthy controls from COVID-19 patients.
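Retaining "95 percent variance" with PCA means keeping the smallest number of principal components whose explained-variance ratios sum to at least 0.95. A sketch of that selection step, applied to a hypothetical eigenvalue spectrum (the projection itself is omitted):

```python
def components_for_variance(eigenvalues, threshold=0.95):
    """Smallest k such that the top-k eigenvalues explain >= threshold of variance."""
    total = sum(eigenvalues)
    cumulative = 0.0
    for k, ev in enumerate(sorted(eigenvalues, reverse=True), start=1):
        cumulative += ev
        if cumulative / total >= threshold:
            return k
    return len(eigenvalues)

# hypothetical spectrum: most variance concentrated in the first components
eigenvalues = [40, 20, 15, 10, 7, 3, 3, 2]
components_for_variance(eigenvalues)  # -> 6 (the first six explain 95 of 100)
```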
EN
Heart diseases cause many deaths around the world every year; their death rate makes them the leading killer among diseases. Early diagnosis can help reduce these deaths and save lives. To ensure a good diagnosis, patients must undergo a series of clinical examinations and analyses, which makes the diagnostic process expensive and inaccessible to many. Speech analysis is a strong tool that can address this task and offers a new way to discriminate between healthy people and people with cardiovascular diseases. Our previous paper treated this task using dysphonia measurements to differentiate people with cardiovascular disease from healthy ones, reaching 81.5% prediction accuracy. This time, we change the method to increase accuracy: we extract a voiceprint consisting of 13 Mel-frequency cepstral coefficients and the pitch from the subjects' voices, using a database of 75 subjects (35 with cardiovascular diseases, 40 healthy) with three recordings of sustained vowels (/a/, /o/, and /i/) collected from each subject. We used the k-nearest-neighbor classifier to train a model and classify the test samples, outperforming the previous results with a prediction accuracy of 95.55%.
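Pitch, one of the features above, can be estimated from a voice signal by autocorrelation: the lag at which the signal best matches a shifted copy of itself gives the fundamental period. A minimal sketch on a synthetic sine wave (real sustained-vowel recordings would be windowed and preprocessed first; the search range of 50–500 Hz is an assumption covering typical voice pitch):

```python
import math

def pitch_autocorr(signal, sample_rate, f_min=50, f_max=500):
    """Estimate fundamental frequency via the autocorrelation peak."""
    lag_min = sample_rate // f_max
    lag_max = sample_rate // f_min
    best_lag, best_score = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        score = sum(signal[i] * signal[i + lag] for i in range(len(signal) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sample_rate / best_lag

# synthetic 200 Hz "vowel" sampled at 8 kHz
sr = 8000
tone = [math.sin(2 * math.pi * 200 * n / sr) for n in range(1600)]
pitch_autocorr(tone, sr)  # -> 200.0
```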
EN
Cardiovascular disease is the leading cause of death worldwide. Diagnosis is made by non-invasive methods, but it is far from being comfortable, rapid, and accessible to everyone. Speech analysis is an emerging non-invasive diagnostic tool, and much research has shown it to be effective in speech recognition and in detecting Parkinson's disease; can it also be effective for differentiating between patients with cardiovascular disease and healthy people? This work answers that question by collecting a database of 75 people, 35 of whom suffer from cardiovascular diseases while 40 are healthy. From each person we took three vocal recordings of sustained vowels (/a/, /o/, and /i/). By measuring dysphonia in speech, we extracted 26 features, with which we trained three types of classifiers: the k-nearest-neighbor, the support vector machine, and the naive Bayes classifier. The methods were tested for accuracy and stability, and we obtained 81% accuracy as the best result, using the k-nearest-neighbor classifier.
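Of the three classifiers above, naive Bayes is the simplest to sketch: it models each feature as an independent Gaussian per class and picks the class with the highest log-likelihood. A minimal pure-Python version on toy one-dimensional dysphonia-like features (illustrative only, not the authors' implementation):

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Per-class, per-feature mean and variance, plus a log prior."""
    groups = defaultdict(list)
    for xi, yi in zip(X, y):
        groups[yi].append(xi)
    model = {}
    for label, rows in groups.items():
        means = [sum(col) / len(rows) for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9  # smoothed
                 for col, m in zip(zip(*rows), means)]
        model[label] = (math.log(len(rows) / len(X)), means, vars_)
    return model

def predict(model, x):
    def log_lik(label):
        prior, means, vars_ = model[label]
        return prior + sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                           for xi, m, v in zip(x, means, vars_))
    return max(model, key=log_lik)

# toy 1-feature data: "cvd" voices cluster near 4.0, "healthy" near 1.0
X = [(3.8,), (4.1,), (4.0,), (0.9,), (1.1,), (1.0,)]
y = ["cvd", "cvd", "cvd", "healthy", "healthy", "healthy"]
model = fit_gaussian_nb(X, y)
predict(model, (3.9,))  # -> "cvd"
```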