Search results
Keyword: natural language inference
Results found: 3
1
From machine translated NLI corpus to universal sentence representations in Czech
Natural language inference (NLI) is a sentence-pair classification task with respect to the entailment relation. As has been shown, certain deep learning architectures for the NLI task, InferSent in particular, can be exploited to obtain (supervised) universal sentence embeddings. Although the InferSent approach to sentence embeddings has recently been outperformed on various tasks by transformer-based architectures (such as BERT and its derivatives), it remains a useful tool in many NLP areas and serves as a strong baseline. One of the greatest advantages of this approach is its relative simplicity. Moreover, in contrast to other approaches, InferSent models can be trained on a standard GPU within hours. Unfortunately, the majority of research on sentence embeddings is done in and for English, while other languages are largely neglected. To fill this gap, we propose a methodology for obtaining universal sentence embeddings in another language by training InferSent-based sentence encoders on a machine-translated NLI corpus, and we present a transfer-learning use case on semantic textual similarity in Czech.
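The InferSent recipe the abstract builds on feeds the NLI classifier a combined pair representation [u; v; |u − v|; u ∗ v] and then reuses the trained encoder's sentence vectors, e.g. via cosine similarity for semantic textual similarity. A minimal dependency-free sketch (the `encode` function below is a toy stand-in for the trained BiLSTM-max-pooling encoder; all names and dimensions are illustrative):

```python
import math
import random

random.seed(0)
DIM = 8  # toy embedding size; the real InferSent encoder outputs 4096-dim vectors

# Stand-in for a trained sentence encoder: random word vectors averaged per
# sentence. In the paper's setting this would be a BiLSTM with max pooling
# trained on a machine-translated NLI corpus.
VOCAB = {}

def word_vec(word):
    if word not in VOCAB:
        VOCAB[word] = [random.uniform(-1, 1) for _ in range(DIM)]
    return VOCAB[word]

def encode(sentence):
    vecs = [word_vec(w) for w in sentence.lower().split()]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def nli_features(u, v):
    """InferSent-style pair representation for the NLI classifier:
    [u; v; |u - v|; u * v]."""
    return (u + v
            + [abs(a - b) for a, b in zip(u, v)]
            + [a * b for a, b in zip(u, v)])

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v)))

u = encode("A man is playing a guitar")
v = encode("A man is playing a guitar")

print(len(nli_features(u, v)))   # 4 * DIM = 32
print(round(cosine(u, v), 3))    # identical sentences -> 1.0
```

For the STS transfer described in the abstract, only `encode` and `cosine` are reused; the NLI classifier on top of `nli_features` is discarded after training.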
2
Improving utilization of lexical knowledge in natural language inference
Natural language inference (NLI) is a central problem in natural language processing (NLP): predicting the logical relationship between a pair of sentences. Lexical knowledge, which represents relations between words, is often important for solving NLI problems. This knowledge can be accessed through an external knowledge base (KB), but only when such a resource is available. Instead of using a KB, we propose a simple architectural change for attention-based models. We show that by adding a skip connection from the input to the attention layer, we can better utilize the lexical knowledge already present in the pretrained word embeddings. Finally, we demonstrate that our strategy makes it possible to use an external source of knowledge in a straightforward manner by incorporating a second word-embedding space into the model.
3
In this paper, we show how a rich lexico-semantic network built using the serious game JeuxDeMots can help us ground our semantic ontologies when doing formal semantics with rich or modern type theories (type theories in the tradition of Martin-Löf). We discuss the issue of base types, adjectival and verbal types, and hyperonymy/hyponymy relations, as well as more advanced issues such as homophony and polysemy. We show how one can take advantage of this wealth of lexical semantics in a formal compositional semantics framework. We argue that this is a way to sidestep the problem of deciding what the type ontology should look like once a move to a many-sorted type system has been made. Furthermore, we show how this kind of information can be extracted from a lexico-semantic network like JeuxDeMots and inserted into a proof assistant like Coq in order to perform reasoning tasks.
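One plausible way to realize the final step, exporting lexico-semantic relations into Coq, is to emit one type declaration per word and one coercion per hyponymy link, so the type hierarchy becomes usable by Coq's type checker. A hypothetical sketch (the relation dump and the `Coercion`-based encoding are assumptions for illustration, not the authors' actual pipeline):

```python
# Hypothetical JeuxDeMots-style relation dump: (source, relation, target)
RELATIONS = [
    ("dog", "is_a", "animal"),
    ("cat", "is_a", "animal"),
    ("animal", "is_a", "entity"),
]

def to_coq(relations):
    """Emit Coq declarations: a parameter type per word and an implicit
    coercion per hyponymy link, making e.g. every `dog` usable where an
    `animal` is expected during type checking and proof search."""
    decls, seen = [], set()
    for src, _, tgt in relations:
        for word in (src, tgt):
            if word not in seen:
                seen.add(word)
                decls.append(f"Parameter {word} : Set.")
    for src, rel, tgt in relations:
        if rel == "is_a":
            decls.append(f"Parameter {src}_to_{tgt} : {src} -> {tgt}.")
            decls.append(f"Coercion {src}_to_{tgt} : {src} >-> {tgt}.")
    return "\n".join(decls)

print(to_coq(RELATIONS))
```

Loading the generated file into Coq would then let hyponymy-based inferences (a dog is an animal, an animal is an entity) follow from coercion chains rather than explicit lemmas.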