Search results
Keyword: semantic textual similarity
Results found: 1
1. From machine translated NLI corpus to universal sentence representations in Czech
Natural language inference (NLI) is a sentence-pair classification task with respect to the entailment relation. As already shown, certain deep learning architectures for the NLI task - InferSent in particular - can be exploited to obtain (supervised) universal sentence embeddings. Although the InferSent approach to sentence embeddings has recently been outperformed on various tasks by transformer-based architectures (such as BERT and its derivatives), it remains a useful tool in many NLP areas and also serves as a strong baseline. One of its greatest advantages is its relative simplicity. Moreover, in contrast to other approaches, InferSent models can be trained on a standard GPU within hours. Unfortunately, the majority of research on sentence embeddings is done in and for English, while other languages are largely neglected. In order to fill this gap, we propose a methodology for obtaining universal sentence embeddings in another language - arising from training InferSent-based sentence encoders on a machine-translated NLI corpus - and present a transfer-learning use-case on semantic textual similarity in Czech.
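The transfer-learning use-case described in the abstract amounts to reusing a trained sentence encoder as a fixed feature extractor and scoring sentence pairs by the similarity of their embeddings. The sketch below illustrates this in Python under stated assumptions: `encode` stands in for an InferSent-style encoder (here a random stand-in so the snippet runs end to end), and cosine similarity is used as the STS score; neither the function name nor the example sentences come from the paper.

```python
# Minimal sketch of STS scoring with fixed sentence embeddings.
# `encode` is a placeholder for a trained sentence encoder that maps a
# list of sentences to a matrix of fixed-size vectors (one row per sentence).
import numpy as np


def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two sentence embeddings."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def sts_scores(encode, sentence_pairs):
    """Score each (s1, s2) pair by cosine similarity of its embeddings."""
    left = encode([s1 for s1, _ in sentence_pairs])
    right = encode([s2 for _, s2 in sentence_pairs])
    return [cosine_similarity(u, v) for u, v in zip(left, right)]


if __name__ == "__main__":
    # Stand-in encoder: deterministic random vectors, only to make the
    # sketch runnable; a real setup would load a trained encoder instead.
    rng = np.random.default_rng(0)

    def encode(sentences):
        return rng.standard_normal((len(sentences), 4096))

    pairs = [
        ("Dnes svítí slunce.", "Venku je slunečno."),
        ("Dnes svítí slunce.", "Vlak má zpoždění."),
    ]
    print(sts_scores(encode, pairs))
```

In an STS evaluation, such cosine scores would typically be compared against human similarity judgments, e.g. via a rank correlation over the whole test set.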