Results found: 1

Search results
Searched for keyword: data poisoning
Federated learning (FL) involves joint model training by various devices while preserving the privacy of their data. However, it presents the challenge of dealing with heterogeneous data located on the participating devices. This issue can be further complicated by the presence of malicious clients aiming to sabotage the training process by poisoning their local data. In this context, the problem arises of differentiating between poisoned data and data that are merely non-independently-and-identically-distributed (non-IID). To address it, a technique based on data-free synthetic data generation is proposed, using a reversed concept of the adversarial attack. The adversarial inputs improve the training process by measuring clients' coherence and favoring trustworthy participants. Experimental results obtained from image classification tasks on the MNIST, EMNIST, and CIFAR-10 datasets are reported and analyzed.
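
The sketch below illustrates, in PyTorch, one way such a scheme could be organized; it is an assumption-laden reading of the abstract, not the paper's actual implementation. All function names (generate_probe_inputs, coherence_score, weighted_aggregate) and hyperparameters are hypothetical. The idea shown: synthetic probe inputs are optimized from random noise so that the global model classifies them confidently (the reverse of an adversarial attack), each client is scored by how often its model agrees with those intended labels, and the aggregation weights favor coherent, presumably trustworthy, clients.

import torch
import torch.nn.functional as F


def generate_probe_inputs(global_model, num_probes=32, input_shape=(1, 28, 28),
                          num_classes=10, steps=50, lr=0.1):
    # Data-free synthetic probes: start from random noise and optimize each
    # input so the *global* model assigns it confidently to a chosen class
    # (the reverse of an adversarial attack, which perturbs real data to
    # flip the prediction). Shapes/steps are illustrative assumptions.
    global_model.eval()
    probes = torch.randn(num_probes, *input_shape, requires_grad=True)
    targets = torch.randint(0, num_classes, (num_probes,))
    optimizer = torch.optim.Adam([probes], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(global_model(probes), targets)
        loss.backward()
        optimizer.step()
    return probes.detach(), targets


def coherence_score(client_model, probes, targets):
    # Fraction of probes on which the client's model agrees with the labels
    # intended by the global model; clients trained on poisoned data are
    # expected to deviate more than merely non-IID clients.
    client_model.eval()
    with torch.no_grad():
        predictions = client_model(probes).argmax(dim=1)
    return (predictions == targets).float().mean().item()


def weighted_aggregate(global_model, client_models, scores):
    # FedAvg-style aggregation in which each client's contribution is
    # proportional to its coherence score, favoring trustworthy participants.
    weights = torch.tensor(scores, dtype=torch.float32)
    weights = weights / weights.sum().clamp(min=1e-8)
    new_state = {}
    for name, param in global_model.state_dict().items():
        accumulated = torch.zeros_like(param, dtype=torch.float32)
        for w, client in zip(weights, client_models):
            accumulated += w * client.state_dict()[name].float()
        new_state[name] = accumulated.to(param.dtype)
    global_model.load_state_dict(new_state)
    return global_model

Under these assumptions, a training round would call generate_probe_inputs once on the server, score every received client model with coherence_score, and aggregate the client updates with the resulting weights.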