Results found: 4

Search results
Searched for:
keyword: representation learning
1
Improving utilization of lexical knowledge in natural language inference
Natural language inference (NLI) is a central problem in natural language processing (NLP): predicting the logical relationship between a pair of sentences. Lexical knowledge, which represents relations between words, is often important for solving NLI problems. This knowledge can be accessed through an external knowledge base (KB), but only when such a resource is available. Instead of using a KB, we propose a simple architectural change for attention-based models. We show that by adding a skip connection from the input to the attention layer, we can better utilize the lexical knowledge already present in the pretrained word embeddings. Finally, we demonstrate that our strategy allows an external source of knowledge to be used in a straightforward manner, by incorporating a second word embedding space into the model.
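The skip connection described in the abstract can be illustrated with a minimal sketch: raw pretrained embeddings are concatenated onto the transformed representation before attention scores are computed, so lexical similarity stored in the embeddings reaches the attention weights directly. The function name and dimensions below are hypothetical; this is not the paper's exact model, only a toy illustration of the idea.

```python
import numpy as np

def attention_with_skip(x, W):
    """Toy attention layer with a skip connection (illustrative sketch).

    x : (n_tokens, d) pretrained word embeddings (the layer input)
    W : (d, d) learned projection
    The skip connection concatenates the raw embeddings x onto the
    projected representation h before computing attention scores.
    """
    h = np.tanh(x @ W)                            # transformed representation
    h_skip = np.concatenate([h, x], axis=-1)      # skip connection from the input
    scores = h_skip @ h_skip.T                    # pairwise attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ h_skip                       # attended representation

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                       # 5 tokens, 8-dim embeddings
W = rng.normal(size=(8, 8))
out = attention_with_skip(x, W)
print(out.shape)                                  # (5, 16)
```

Because the raw embeddings bypass the learned projection, cosine-style neighborhoods from the pretraining corpus influence the attention weights even if the projection distorts them.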
2
Breast cancer has a high incidence rate compared to other cancers among women, and the disease can be fatal if it is not diagnosed early. Fortunately, by means of modern imaging procedures such as MRI, mammography, and thermography, together with computer systems, it is possible to diagnose all kinds of breast cancer in a short time. One type of breast cancer image is the histology image. Histology images are obtained from the entire cut-off tissue by means of digital cameras and contain invaluable information for diagnosing malignant and benign lesions. Recently, with the adoption of the digital workflow in surgical pathology, diagnosis based on whole-slide microscopy image analysis has attracted the attention of many researchers in medical image processing. Computer-aided diagnosis (CAD) systems are developed to help pathologists make better decisions. Histology-based CAD systems have some weaknesses compared with radiology-based CAD systems. As these images are collected at different laboratory stages and from different samples, they have different distributions, leading to a mismatch between the training (source) domain and the test (target) domain. Moreover, there is great similarity between images of benign tumors and those of malignant ones, so analyzing these images indiscriminately degrades classifier performance and recognition rate. In this research, a new representation-learning-based unsupervised domain adaptation method is proposed to overcome these problems. The method attempts to distinguish feature vectors extracted from benign images from those of malignant ones while learning a space that is as domain-invariant as possible. It achieved an average classification rate of 88.5% on the BreaKHis dataset, improving on basic methods by 5.1% and on state-of-the-art methods by 1.25%.
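The abstract does not specify how the domain-invariant space is learned, so as a generic illustration of the source/target distribution mismatch it describes, here is a CORAL-style alignment sketch (explicitly not the paper's method): source-domain features are whitened and then re-colored with the target-domain covariance, so their second-order statistics match.

```python
import numpy as np

def coral(source, target, eps=1e-5):
    """CORAL-style feature alignment (illustrative, not the paper's method).

    source : (n_s, d) feature vectors from the training (source) domain
    target : (n_t, d) feature vectors from the test (target) domain
    Returns source features transformed so their covariance matches
    the target domain's covariance.
    """
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def sqrtm(m, inv=False):
        # symmetric matrix (inverse) square root via eigendecomposition
        vals, vecs = np.linalg.eigh(m)
        d = np.clip(vals, eps, None) ** (-0.5 if inv else 0.5)
        return (vecs * d) @ vecs.T

    return source @ sqrtm(cs, inv=True) @ sqrtm(ct)  # whiten, then re-color

rng = np.random.default_rng(1)
S = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # skewed source domain
T = rng.normal(size=(500, 4))                            # target domain
A = coral(S, T)
print(A.shape)                                           # (500, 4)
```

After alignment, a classifier trained on the transformed source features sees inputs whose covariance structure matches the target domain, which mitigates the train/test mismatch the abstract identifies.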
3
Towards Learning Word Representation
Continuous vector representations, as distributed representations for words, have gained a lot of attention in the Natural Language Processing (NLP) field. Although they are considered valuable methods for modeling both semantic and syntactic features, they can still be improved. For instance, an open issue is to develop strategies for introducing knowledge about the morphology of words. This is a core point both for dense languages, where many rare words appear, and for texts with numerous metaphors or similes. In this paper, we extend a recent approach to representing word information. The underlying idea of our technique is to present a word as a bag of syllable and letter n-grams. More specifically, we provide a vector representation for each extracted syllable-based and letter-based n-gram and perform concatenation. Moreover, in contrast to the previous method, we accept n-grams of varied length n. Various experiments, on tasks such as word similarity ranking and sentiment analysis, show that our method is competitive with other state-of-the-art techniques and takes a step toward constructing more informative word representations.
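The letter n-gram extraction with varied lengths n can be sketched in a few lines, in the fastText style with word boundary markers. The function name and the boundary-marker convention are assumptions for illustration, not the paper's exact procedure (which also uses syllable-based n-grams).

```python
def letter_ngrams(word, n_min=2, n_max=4):
    """Extract all letter n-grams of lengths n_min..n_max from a word,
    with < and > boundary markers so prefixes and suffixes are
    distinguishable from word-internal substrings (illustrative sketch)."""
    w = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        grams += [w[i:i + n] for i in range(len(w) - n + 1)]
    return grams

print(letter_ngrams("cat"))
# ['<c', 'ca', 'at', 't>', '<ca', 'cat', 'at>', '<cat', 'cat>']
```

Each extracted n-gram would then be looked up in its own embedding table and the resulting vectors combined (the paper concatenates them), so rare or unseen words still receive informative representations from their subword pieces.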
4
On Certain Limitations of Recursive Representation Model
There is a strong research effort towards developing models that achieve state-of-the-art results without sacrificing interpretability and simplicity. One such model is the recently proposed Recursive Random Support Vector Machine (R2SVM), which is composed of stacked linear models. R2SVM was reported to learn deep representations outperforming many strong classifiers, such as deep convolutional neural networks. In this paper we analyze it from both theoretical and empirical perspectives and show its important limitations. An analysis of the similar Deep Representation Extreme Learning Machine (DrELM) model is also included. We conclude that these models, in their current form, achieve lower accuracy scores than a Support Vector Machine with a Radial Basis Function kernel.
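The recursive stacking that gives R2SVM its name can be sketched as follows: each layer's linear classifier produces class scores, which are passed through a random projection and added back onto the input before the next linear model is trained. The function name, the tanh squashing, and the shift scale beta below are illustrative assumptions, not the exact published formulation.

```python
import numpy as np

def r2svm_shift(X, scores, beta=0.1, seed=0):
    """One recursive step of the R2SVM idea (illustrative sketch).

    X      : (n_samples, d) original input features
    scores : (n_samples, k) class scores from the current linear model
    The scores are pushed through a fixed random projection W and added
    to the input, so the next stacked linear model sees a randomly
    shifted version of the data.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((scores.shape[1], X.shape[1]))  # random projection
    return np.tanh(X + beta * scores @ W)  # squashed, shifted input for the next layer

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))        # 10 samples, 3 features
scores = rng.normal(size=(10, 2))   # decision values for 2 classes
X_next = r2svm_shift(X, scores)
print(X_next.shape)                 # (10, 3)
```

Because every stacked model is linear and the shifts are random, the paper's criticism is plausible on its face: the composition adds no kernel-like nonlinearity tailored to the data, which is consistent with an RBF-kernel SVM outperforming it.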