Search results
Keyword query: semi supervised learning
Results found: 2
Acoustic features of speech are promising objective markers for mental health monitoring. Specialized smartphone apps can gather such acoustic data without disrupting the daily activities of patients. Nonetheless, psychiatric assessment of a patient's mental state typically occurs only sporadically, every few months. Consequently, only a small fraction of the acoustic data is labeled and applicable for supervised learning. Most related work on mental health monitoring restricts itself to labeled data within a predefined ground-truth period. Semi-supervised methods, on the other hand, make it possible to utilize the entire dataset, exploiting the regularities in the unlabeled portion of the data to improve the predictive power of a model. To assess the applicability of semi-supervised learning approaches, we discuss selected state-of-the-art semi-supervised classifiers, namely label spreading, label propagation, a semi-supervised support vector machine, and the self-training classifier. We use real-world data obtained from a bipolar disorder patient to compare the performance of the different methods with that of baseline supervised learning methods. The experiment shows that semi-supervised learning algorithms can outperform supervised algorithms in predicting bipolar disorder episodes.
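The classifier families named above (label spreading, label propagation, self-training) are available in scikit-learn, where unlabeled samples are marked with the label `-1`. A minimal sketch on synthetic data, not the authors' acoustic dataset, comparing the methods when most labels are hidden:

```python
# Illustrative sketch only: toy data stands in for the paper's acoustic
# features, and 90% of labels are hidden to mimic sparse psychiatric labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import (
    LabelSpreading, LabelPropagation, SelfTrainingClassifier,
)
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) < 0.9] = -1   # -1 marks unlabeled samples

models = {
    "label_spreading": LabelSpreading(),
    "label_propagation": LabelPropagation(),
    "self_training": SelfTrainingClassifier(SVC(probability=True)),
}
for name, model in models.items():
    model.fit(X, y_partial)                # trains on labeled + unlabeled
    print(f"{name}: {model.score(X, y):.3f}")  # accuracy vs. full ground truth
```

A semi-supervised SVM (S3VM), also evaluated in the paper, has no scikit-learn implementation and is omitted here.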
Deep convolutional neural networks have shown eminent performance in supervised medical image segmentation. However, this success is predicated on the availability of large volumes of pixel-level labeled data, making these approaches impractical when labeled data is scarce. Semi-supervised learning (SSL), on the other hand, utilizes pertinent information from unlabeled data along with minimal labeled data, alleviating the demand for annotation. In this paper, we leverage a mixup-based risk minimization operator in a student-teacher-based semi-supervised paradigm, together with structure-aware constraints, to enforce consistency between the student predictions for unlabeled samples and the teacher predictions for the corresponding mixup samples, significantly diminishing the need for labeled data. Moreover, due to the intrinsic simplicity of the linear combination used to generate mixup samples, the proposed method holds a computational advantage over existing consistency regularization-based SSL methods. We experimentally validate the performance of the proposed model on two public benchmark datasets, namely the Left Atrial (LA) and Automatic Cardiac Diagnosis Challenge (ACDC) datasets. Notably, in the LA dataset's lowest labeled-data setting (5%), the proposed method improved the Dice Similarity Coefficient and the Jaccard Similarity Coefficient by 1.08% and 1.46%, respectively. Furthermore, we demonstrate the efficacy of the proposed method with consistent improvements across various labeled-data proportions on the aforementioned datasets.
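The mixup operation underlying the consistency term is a plain linear combination of two inputs with a Beta-distributed coefficient. A hedged NumPy sketch (illustrative only; the paper's segmentation network, structure-aware constraints, loss weighting, and teacher update are omitted, and the MSE consistency penalty here is a stand-in):

```python
import numpy as np

def mixup(x1, x2, alpha=0.2, rng=None):
    """Return the mixup sample lam*x1 + (1-lam)*x2 and the coefficient lam."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)           # lam ~ Beta(alpha, alpha), in [0, 1]
    return lam * x1 + (1 - lam) * x2, lam

def consistency_loss(student_pred_mixed, teacher_pred1, teacher_pred2, lam):
    """Penalize the student's prediction on the mixed input against the
    identically mixed teacher predictions (MSE as a stand-in loss)."""
    target = lam * teacher_pred1 + (1 - lam) * teacher_pred2
    return float(np.mean((student_pred_mixed - target) ** 2))
```

Because generating a mixup sample is a single weighted sum, it is cheaper than the perturbation or augmentation pipelines used by many other consistency-regularization methods, which is the computational advantage the abstract refers to.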