Search results
Searched in keywords: unsupervised domain adaptation
Results found: 2
EN
Specific emitter identification (SEI) is the process of identifying individual emitters by analyzing their radio frequency emissions, exploiting the fact that each device contains unique hardware imperfections. While most previous research focuses on obtaining discriminative features, the reliability of those features is rarely considered. For example, because the device characteristics of an emitter vary when it operates at different carrier frequencies, the performance of SEI approaches may degrade when the training data and the test data are collected from the same emitters at different frequencies. To improve the performance of SEI under varying carrier frequency, we propose an approach based on the continuous wavelet transform (CWT) and a domain adversarial neural network (DANN). The proposed approach exploits unlabeled test data in addition to labeled training data in order to learn representations that are discriminative for individual emitters and invariant to varying frequencies. Experiments are conducted on signals received from five emitters at three carrier frequencies. The results demonstrate the superior performance of the proposed approach when the carrier frequencies of the training data and the test data differ.
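The DANN training described above rests on a gradient reversal layer: the forward pass leaves features unchanged for the domain classifier, while the backward pass flips the gradient so the feature extractor is pushed toward frequency-invariant representations. A minimal numpy sketch of that mechanism follows; the class name, API, and the scalar weight `lam` are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer used in DANN-style adversarial training.

    The forward pass is the identity, so the domain classifier sees the
    features unchanged; the backward pass multiplies the incoming gradient
    by -lam, driving the feature extractor toward domain-confusing
    (here, frequency-invariant) representations.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # adaptation weight, often ramped from 0 to 1 during training

    def forward(self, x):
        return x  # identity in the forward direction

    def backward(self, grad_output):
        return -self.lam * grad_output  # flip and scale the gradient


# Toy usage: features pass through unchanged, gradients come back reversed.
x = np.array([1.0, -2.0, 3.0])
layer = GradientReversal(lam=0.5)
out = layer.forward(x)
grad = layer.backward(np.ones_like(x))
```

In a full DANN, this layer sits between the shared feature extractor and the domain classifier, so minimizing the domain loss downstream maximizes it upstream.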
EN
An insufficient number, or a complete lack, of training samples is a bottleneck in traditional machine learning and object recognition. Recently, unsupervised domain adaptation has been proposed and widely applied to cross-domain object recognition; it utilizes labeled samples from a source domain to improve classification performance in a target domain where no labeled samples are available. The two domains share the same feature and label spaces but have different distributions. Most existing approaches aim to learn new representations of samples in the source and target domains by reducing the distribution discrepancy between domains while maximizing the covariance of all samples. However, they ignore subspace discrimination, which is essential for classification. Recently, some approaches have incorporated discriminative information from source samples, but the learned space tends to overfit these samples because they do not consider the structure information of target samples. Therefore, we propose a feature reduction approach that learns robust transfer features by reducing the distribution discrepancy between domains while preserving the discriminative information of the source domain and the local structure of the target domain. Experimental results on several well-known cross-domain datasets show that the proposed method outperforms state-of-the-art techniques in most cases.
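The "distribution discrepancy between domains" that such methods reduce is commonly measured with the maximum mean discrepancy (MMD). A minimal linear-kernel sketch in numpy follows, assuming samples are stored row-wise; the function name and the synthetic data are illustrative, and the abstract does not specify which discrepancy measure the paper actually uses.

```python
import numpy as np

def linear_mmd2(Xs, Xt):
    """Squared linear-kernel MMD between source samples Xs and target
    samples Xt (rows = samples): the squared distance between the two
    mean feature vectors. A value near 0 means the domains have similar
    first moments in the current feature space."""
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(diff @ diff)


# Toy usage: a shifted target domain yields a larger discrepancy.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 4))        # source-domain features
tgt_same = rng.normal(0.0, 1.0, size=(200, 4))   # target drawn from the same distribution
tgt_shift = tgt_same + 2.0                       # target with a mean shift (domain gap)
```

Adaptation methods of this family minimize such a discrepancy over a learned projection of the features, alongside terms that preserve source discriminability and target local structure.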