Keywords: Adversarial Attacks
Adversarial attacks are actions that aim to mislead models by introducing subtle, often imperceptible changes to a model's input. Resilience to this kind of risk is key for all task-specific Natural Language Processing (NLP) models. The current state-of-the-art approach to one such NLP task, Named Entity Recognition (NER), relies on transformer-based models; previous solutions were based on Conditional Random Fields (CRF). This research aims to investigate and compare the robustness of transformer-based and CRF-based NER models against adversarial attacks. By subjecting these models to carefully crafted perturbations, we seek to understand how well they withstand attempts to manipulate their input and compromise their performance. This comparative analysis provides valuable insights into the strengths and weaknesses of each architecture, shedding light on the most effective strategies for enhancing the security and reliability of NER systems.
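To make the threat model concrete, the sketch below applies a simple typo-style character perturbation to an input sentence and compares a transformer NER model's predictions before and after the attack. This is a minimal illustration, not the paper's actual experimental setup: the perturbation strategy (random adjacent-character swaps) and the checkpoint name `dslim/bert-base-NER` are illustrative assumptions; any token-classification model would serve for the comparison.

```python
# A minimal sketch of a character-level adversarial perturbation against a
# transformer-based NER model. Model choice and perturbation strategy are
# illustrative assumptions, not the study's actual attack method.
import random

from transformers import pipeline


def perturb(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Swap adjacent alphabetic characters at the given rate, mimicking
    the subtle, typo-like input changes an adversary might introduce."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


# Hypothetical choice of a public NER checkpoint from the Hugging Face hub.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

clean = "Barack Obama visited Warsaw in 2016."
attacked = perturb(clean, rate=0.3)

print("perturbed input:", attacked)
print("clean:   ", [(e["word"], e["entity_group"]) for e in ner(clean)])
print("attacked:", [(e["word"], e["entity_group"]) for e in ner(attacked)])
```

A robustness comparison of the kind the abstract describes would repeat this over a labeled test set and measure how much entity-level F1 degrades under increasing perturbation rates, for both the transformer-based and the CRF-based model.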