The human brain learns language by processing written or spoken input. Recently, several deep neural networks have been applied successfully to natural language generation. Although such networks can be trained, it remains unknown how they (or the brain) actually process language. A scalable method for distributed storage and recall of sentences within a neural network is presented. A corpus of 59 million words was used for training. A system using this method can efficiently identify sentences that are reasonable replies to an input sentence. The system first selects a small number of seed words that occur with low frequency in the corpus. These seed words are then used to generate answer sentences. Candidate answers are scored using statistical data also obtained from the corpus. A number of sample answers generated by the system are shown to illustrate how the method works.
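The seed-selection and scoring pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration only: the function names, the whitespace tokenisation, and the rarity-weighted scoring rule are assumptions for the sketch, not the statistical measures actually used in the paper.

```python
from collections import Counter

def build_frequencies(corpus_sentences):
    """Count word occurrences across the corpus."""
    freq = Counter()
    for sentence in corpus_sentences:
        freq.update(sentence.lower().split())
    return freq

def select_seeds(sentence, freq, n_seeds=2):
    """Pick the lowest-frequency input words (that occur in the corpus) as seeds."""
    words = [w for w in sentence.lower().split() if freq[w] > 0]
    return sorted(words, key=lambda w: freq[w])[:n_seeds]

def score(candidate, seeds, freq):
    """Score a candidate reply: shared seed words count more the rarer they are.
    (A stand-in for the paper's corpus statistics, which are not specified here.)"""
    words = set(candidate.lower().split())
    return sum(1.0 / freq[s] for s in seeds if s in words)

def best_reply(input_sentence, corpus_sentences, n_seeds=2):
    """Return the corpus sentence that best matches the input's seed words."""
    freq = build_frequencies(corpus_sentences)
    seeds = select_seeds(input_sentence, freq, n_seeds)
    return max(corpus_sentences, key=lambda c: score(c, seeds, freq))
```

In this toy version the candidate replies come straight from the corpus; the paper's system instead generates answer sentences from the seeds, which the sketch does not attempt to reproduce.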
The generation of narrative texts poses particular difficulties with respect to the interplay of description and narration, the recounting and interpretation of events from different perspectives, and the interweaving of dialogue and narration. Starting from previous work within the VINCI natural language generation environment, we show how a model of perspective and aspect proposed in the 1960s by the linguist Andre Burger allows for a significant enrichment of generated narratives in these three respects.