Search results
Searched for keyword: formal concept analysis
Results found: 28 (page 1 of 2)
1
Content available remote Continuous Domains in Formal Concept Analysis
EN
Formal Concept Analysis (FCA) has been proven to be an effective method of restructuring complete lattices and various algebraic domains. In this paper, the notion of contractive mappings over formal contexts is proposed, which can be viewed as a generalization of interior operators on sets into the framework of FCA. Then, by considering subset-selections consistent with contractive mappings, the notions of attribute continuous formal contexts and continuous concepts are introduced. It is shown that the set of continuous concepts of an attribute continuous formal context forms a continuous domain, and every continuous domain can be restructured in this way. Moreover, the notion of F-morphisms is identified to produce a category equivalent to that of continuous domains with Scott continuous functions. The paper also investigates the representations of various subclasses of continuous domains including algebraic domains and stably continuous semilattices.
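For orientation, the sketch below illustrates the plain FCA machinery the papers in this listing build on: the two derivation operators of a formal context and the definition of a formal concept. It is a generic, minimal example with a made-up toy context, not a construction from the paper above.

```python
# Minimal, generic FCA sketch: derivation operators and formal concepts.
# The context below is a made-up toy example, not data from the paper.

objects = {"o1", "o2", "o3"}
attributes = {"a", "b", "c"}
incidence = {("o1", "a"), ("o1", "b"), ("o2", "b"), ("o2", "c"), ("o3", "b")}

def up(A):
    """Attributes shared by every object in A (the intent operator A')."""
    return {m for m in attributes if all((g, m) in incidence for g in A)}

def down(B):
    """Objects having every attribute in B (the extent operator B')."""
    return {g for g in objects if all((g, m) in incidence for m in B)}

def is_formal_concept(A, B):
    """(A, B) is a formal concept iff A' = B and B' = A."""
    return up(A) == B and down(B) == A

print(is_formal_concept({"o1", "o2", "o3"}, {"b"}))  # True: 'b' is common to all objects
```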
2
Content available remote From Data to Pattern Structures : Near Set Approach
EN
Pattern structures were introduced by Ganter and Kuznetsov in the framework of formal concept analysis (FCA) as a means to enable direct analysis of objects having complex descriptions, e.g., descriptions presented in the form of graphs instead of a set of properties. Pattern structures actually generalise/replace the original FCA representation of the initial information about objects, that is, formal contexts (which form a special type of data tables); as a consequence, pattern structures are regarded in FCA as given (in some sense a priori to the analysis) rather than built (a posteriori) from data. The main goal of this paper is twofold: firstly, we would like to export the idea of pattern structures to the framework of rough set theory (RST), in a way consistent with its methodology; secondly, we want to derive pattern structures from simple data tables rather than regard them as the initial information about objects. To this end we present and discuss two methods of generating non-trivial pattern structures from simple information systems/tables. Both methods are inspired by near set theory, a methodology theoretically close to rough set theory but developed in the topological setting of (descriptive) nearness of sets. Interestingly, these methods bear formal connections to other ideas from RST, such as generalised decisions or symbolic value grouping.
EN
The theory of Formal Concept Analysis (FCA) provides efficient methods for the conceptualization of formal contexts. The methods of FCA are applied mainly in the fields of knowledge engineering and data mining. The key element in FCA applications is the generation of a concept set. The main goal of this research work is to develop an efficient incremental method for the construction of concept sets. The incremental construction method is intended for problems where the context may change dynamically. The paper first proposes a novel incremental concept set construction algorithm called ALINC, whose insertion loop runs over the attribute set. Combining object-level context processing with ALINC yields an object-level incremental algorithm (OALINC), in which the context is built up object by object. Based on the performed tests, OALINC dominates other popular batch or incremental methods for sparse contexts. For dense contexts, the OINCLOSE method, which uses the InClose algorithm to process reduced contexts, provides superior efficiency. Regarding the OALINC/OINCLOSE algorithms, our test results with uniformly distributed and real data sets show that our method provides very good performance over the full investigated parameter range. Especially good results are observed for symmetric contexts in the case of word clustering using context-based similarity.
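ALINC and OALINC themselves are not reproducible from the abstract alone; as background, the sketch below only shows the textbook object-incremental idea such algorithms refine: the intents of a context are exactly the intersections of subsets of object intents, so adding an object amounts to intersecting its row with every intent found so far. The toy context is made up.

```python
# Object-incremental construction of all intents of a formal context.
# This is only the textbook idea underlying incremental FCA algorithms,
# not the ALINC/OALINC algorithms from the paper; the toy data is made up.

from typing import Iterable, Set, FrozenSet

def all_intents(object_intents: Iterable[Set[str]], attributes: Set[str]) -> Set[FrozenSet[str]]:
    """Return every intent, built object by object.

    Invariant: after processing k objects, `intents` holds all intersections
    of any subset of their row intents (the empty intersection is `attributes`).
    """
    intents: Set[FrozenSet[str]] = {frozenset(attributes)}
    for row in object_intents:
        g = frozenset(row)
        intents |= {a & g for a in intents}
    return intents

attributes = {"a", "b", "c", "d"}
rows = [{"a", "b"}, {"b", "c"}, {"a", "b", "d"}]
for intent in sorted(all_intents(rows, attributes), key=len):
    print(set(intent) or "{}")
```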
4
Content available remote Algebras of Definable Sets vs. Concept Lattices
EN
The paper is aimed at comparing Rough Set Theory (RST) and Formal Concept Analysis (FCA) with respect to the algebraic structures of concepts appearing in both theories, namely algebras of definable sets and concept lattices. The paper also presents the basic ideas and concepts of RST and FCA, together with some set-theoretical concepts connected with set spaces, which can serve as a convenient platform for a comparison of RST and FCA. The last section gives necessary and sufficient conditions for the families of definable sets and concept extents determined by the same formal context to be equal. In the finite case this is equivalent to an isomorphism of the respective structures, and in general it reflects a very specific situation in which both theories give the same conceptual hierarchies.
5
Content available remote Rough Fuzzy Concept Analysis
EN
We provide a new approach to the fusion of Fuzzy Formal Concept Analysis and Rough Set Theory. As a starting point we take a pair of fuzzy relations, one representing the lower approximation and the other the upper approximation of a given data table. By defining appropriate concept-forming operators we transfer the roughness of the input data table to the roughness of the corresponding formal fuzzy concepts, in the sense that a formal fuzzy concept is considered as a collection of objects accompanied by two fuzzy sets of attributes: those which are shared by all the objects and those which at least one object has. In the paper we study the properties of such formal concepts and show their relationship with concepts formed by the well-known isotone and antitone operators.
EN
In recent years, FCA has received significant attention from research communities in various fields. Further, the theory of FCA is being extended to new frontiers and augmented with other knowledge representation frameworks. Against this backdrop, this paper aims to provide an understanding of the necessary mathematical background for each extension of FCA, such as FCA with granular computing, fuzzy settings, interval-valued contexts, possibility theory, triadic analysis, factor concepts, and the handling of incomplete data. Subsequently, the paper illustrates emerging trends for each extension, together with applications. To this end, we summarize more than 350 recent (published after 2011) research papers indexed in Google Scholar, IEEE Xplore, ScienceDirect, Scopus, SpringerLink, and a few authoritative fundamental papers.
7
Content available remote Tolerances Induced by Irredundant Coverings
EN
In this paper, we consider tolerances induced by irredundant coverings. Each tolerance R on U determines a quasiorder ≤R by setting x ≤R y if and only if R(x) ⊆ R(y). We prove that for a tolerance R induced by a covering H of U, the covering H is irredundant if and only if the quasiordered set (U, ≤R) is bounded by minimal elements and the tolerance R coincides with the product ≤R ◦ ≤R. We also show that in such a case H = {↑m | m is minimal in (U, ≤R)}, and for each minimal m, we have R(m) = ↑m. Additionally, this irredundant covering H inducing R consists of some blocks of the tolerance R. We give necessary and sufficient conditions under which H and the set of R-blocks coincide. These results are established by applying the notion of Helly numbers of quasiordered sets.
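Read concretely, the definitions above can be sketched as follows (with a hypothetical covering): R(x) is the union of the covering blocks containing x, and x ≤R y holds iff R(x) ⊆ R(y).

```python
# Tolerance induced by a covering, and the quasiorder it determines.
# The covering H below is a made-up example; the definitions follow the abstract:
#   x R y  iff some block of H contains both x and y,  so  R(x) = union of blocks containing x,
#   x <=_R y  iff  R(x) is a subset of R(y).

U = {1, 2, 3, 4}
H = [{1, 2}, {2, 3, 4}, {3, 4}]          # a covering of U

def R(x):
    """Neighbourhood of x in the tolerance induced by the covering H."""
    return set().union(*(block for block in H if x in block))

def leq_R(x, y):
    """The quasiorder determined by R:  x <=_R y  iff  R(x) ⊆ R(y)."""
    return R(x) <= R(y)

print(R(2))          # {1, 2, 3, 4}
print(leq_R(1, 2))   # True:  R(1) = {1, 2} ⊆ R(2)
print(leq_R(2, 1))   # False
```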
EN
Objective: To present a solution to the problem of segmenting domain-specific text. The examined text came from reports (the "Information about the event" form, "Information about the event - descriptive data" field) drawn up after firefighting and rescue operations by units of the State Fire Service. Methods: To carry out the task, the author proposed a method of designing the knowledge base and the rules of a rule-based segmenter. The method proposed in the article is based on formal concept analysis (FCA). The knowledge and rule base designed with the proposed method made it possible to segment the available documentation. The correctness and effectiveness of the proposed method were verified by comparing its results with two other solutions used for text segmentation. Results: As part of the research and analysis, the rules and abbreviations occurring in the examined reports were described and grouped. Using formal concept analysis, a hierarchy of the detected rules and abbreviations was created; the extracted hierarchy served as the knowledge and rule base of the rule-based segmenter. Numerical and comparative experiments on the author's solution and two other solutions showed significantly better performance of the former; for example, the F-measure obtained with the proposed method is 95.5%, which is 7-8% better than the other two solutions. Conclusions: The proposed method of designing the knowledge and rule base of a rule-based segmenter enables the design and implementation of text segmentation software with a small segmentation error. The basic rule of detecting the end of a sentence by interpreting a full stop and additional characters as the end of a segment must, in practice and especially for specialist texts, be wrapped with additional rules; this considerably improves segmentation quality and reduces its error. The formal concept analysis presented in the article is well suited to building and representing such rules. The engineer's knowledge and additional experiments can enrich the created network with new rules, and newly introduced knowledge can easily be mapped onto the existing semantic network, thereby further improving text segmentation. Moreover, the numerical experiment produced a unique set of rules and abbreviations used in the reports, as well as a set of correctly separated and labelled segments.
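The author's FCA-derived rule base is not reproduced here; the toy sketch below only illustrates the kind of rule discussed in the conclusions, namely that a full stop ends a segment unless it terminates a known abbreviation. The abbreviation list and sample sentence are hypothetical.

```python
# Toy rule-based sentence segmenter of the kind discussed above.
# A full stop ends a segment unless it terminates a known abbreviation;
# the abbreviation list and the sample text are made up, not the paper's rule base.

import re

ABBREVIATIONS = {"np", "godz", "ul", "nr"}   # hypothetical abbreviations ("godz." = "hour", "ul." = "street")

def segment(text: str) -> list[str]:
    segments, start = [], 0
    for match in re.finditer(r"\.\s+", text):
        before = text[:match.start()].rsplit(None, 1)
        word_before = before[-1].lower() if before else ""
        if word_before in ABBREVIATIONS:
            continue                          # the dot belongs to an abbreviation, keep going
        segments.append(text[start:match.end()].strip())
        start = match.end()
    rest = text[start:].strip()
    if rest:
        segments.append(rest)
    return segments

print(segment("Zdarzenie o godz. 14:30 przy ul. Polnej. Dzialania zakonczono."))
# ['Zdarzenie o godz. 14:30 przy ul. Polnej.', 'Dzialania zakonczono.']
```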
9
Content available remote Vector-based Attribute Reduction Method for Formal Contexts
EN
Attribute reduction is a basic issue in knowledge discovery in information systems. In this paper, based on the object-oriented concept lattice and the classical concept lattice, an approach to attribute reduction for formal contexts is investigated. We consider attribute reduction and attribute characteristics from the perspective of linear dependence of vectors. We first introduce the notion of a context matrix and the operations on the corresponding column vectors, and then present some judgment theorems of attribute reduction for formal contexts. Furthermore, we propose a new method for reducing formal contexts and give the corresponding reduction algorithms. Compared with previous reduction approaches, which employ a discernibility matrix and a discernibility function to determine all reducts, the proposed approach is simpler and easier to implement.
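The paper's vector-based judgment theorems are not reproduced here; as background, the sketch below checks the classical notion of a reducible attribute (one whose extent is an intersection of other attribute extents) on a made-up context, which is the notion such reduction methods aim to decide.

```python
# Classical attribute reducibility in a formal context (background for the abstract above):
# an attribute is reducible if its extent equals the intersection of the extents of
# some other attributes.  This is not the paper's vector-based criterion; toy data only.

from itertools import combinations

G = {1, 2, 3}                                  # objects of a made-up context
extent = {"a": {1, 2}, "b": {2, 3}, "c": {2}}  # attribute -> objects having it (context columns)

def is_reducible(m):
    """m is reducible iff extent(m) is an intersection of other attribute extents."""
    others = [x for x in extent if x != m]
    for k in range(len(others) + 1):
        for combo in combinations(others, k):
            inter = G.copy()
            for x in combo:
                inter &= extent[x]
            if inter == extent[m]:
                return True
    return False

for m in extent:
    print(m, is_reducible(m))   # a: False, b: False, c: True ({2} = extent(a) ∩ extent(b))
```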
10
Content available remote Outlier Detection by Interaction with Domain Experts
EN
We present a method for improving the detection of outlying Fire Service reports based on domain knowledge and dialogue with Fire & Rescue domain experts. An outlying report is considered to be an element which differs significantly from the remaining data. We follow the position of Professor Andrzej Skowron that effective algorithms in data mining and knowledge discovery in big data should incorporate interaction with domain experts and/or be domain oriented. Outliers are defined and searched for on the basis of domain knowledge and dialogue with experts. We face the problem of reducing high data dimensionality without losing the specificity and real complexity of the reported incidents. We solve this problem by introducing a knowledge-based generalization level mediating between the analyzed data and the experts' domain knowledge. In our approach we use Formal Concept Analysis methods both to generate appropriate categories from data and as tools supporting communication with domain experts. We conducted two experiments on finding two types of outliers, in which outlier detection was supported by domain experts.
11
Content available remote Set-theoretic Approaches to Granular Computing
EN
A framework is proposed for studying a particular class of set-theoretic approaches to granular computing. A granule is a subset of a universal set, a granular structure is a family of subsets of the universal set, and the relationship between granules is given by the standard set-inclusion relation. By imposing different conditions on the family of subsets, we can define several types of granular structures. A number of studies, including rough set analysis, formal concept analysis and knowledge spaces, adopt specific models of granular structures. The proposed framework therefore provides a common ground for unifying these studies. The notion of approximations is examined on the basis of granular structures.
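As an illustration of one common set-theoretic reading of these notions (an assumption, since the framework admits several variants), the sketch below computes lower and upper approximations of a set with respect to a made-up granular structure: the lower approximation unions the granules contained in the set, the upper approximation unions the granules meeting it.

```python
# One common definition of approximations based on a granular structure,
# i.e. a family of subsets of a universal set (made-up example; the paper's
# framework admits several granular structures and approximation variants).

U = {1, 2, 3, 4, 5}
granules = [{1, 2}, {2, 3}, {4}, {4, 5}]

def lower(X):
    """Union of the granules entirely contained in X."""
    return set().union(*(g for g in granules if g <= X))

def upper(X):
    """Union of the granules that intersect X."""
    return set().union(*(g for g in granules if g & X))

X = {1, 2, 4}
print(lower(X))   # {1, 2, 4}
print(upper(X))   # {1, 2, 3, 4, 5}
```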
12
Content available remote Row and Column Spaces of Matrices over Residuated Lattices
EN
We present results regarding the row and column spaces of matrices whose entries are elements of residuated lattices. In particular, we define the notions of a row and column space for matrices over residuated lattices, provide connections to concept lattices and other structures associated with such matrices, and show several properties of the row and column spaces, including properties that relate the row and column spaces to the Schein ranks of matrices over residuated lattices. Among the properties is a characterization of matrices whose row (column) spaces are isomorphic. In addition, we present observations on the relationships between results established in Boolean matrix theory on the one hand and formal concept analysis on the other.
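For orientation, the sketch below shows the ⊗-composition of matrices over the Łukasiewicz residuated lattice on [0, 1], the kind of algebra such matrices live in; the paper's row/column space and Schein-rank constructions are not reproduced, and the matrices are made up.

```python
# Matrices over the Lukasiewicz residuated lattice on [0, 1]: the composition
# (A ∘ B)[i][j] = max_k A[i][k] ⊗ B[k][j], with a ⊗ b = max(0, a + b - 1).
# This only illustrates the algebra the paper works over; the row/column space
# and Schein-rank constructions themselves are not reproduced here.

def tnorm(a, b):
    """Lukasiewicz t-norm."""
    return max(0.0, a + b - 1.0)

def compose(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[max(tnorm(A[i][k], B[k][j]) for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

A = [[0.8, 0.3],
     [1.0, 0.5]]
B = [[0.9, 0.2],
     [0.6, 1.0]]
print(compose(A, B))   # approximately [[0.7, 0.3], [0.9, 0.5]]
```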
13
Content available remote Fixing Generalization Defects in UML Use Case Diagrams
EN
Use case diagrams appear early in UML-based development; they are structured around the concepts of actors and use cases to capture the user requirements of an application. Good modeling practice suggests that use case diagrams should be simple and easy to read, two goals that can be achieved by introducing relevant generalizations of actors and use cases. The approach presented in this paper uses Formal Concept Analysis and one of its variants, Relational Concept Analysis, to refactor a use case diagram as a whole in order to make it clearer while respecting the semantics of the original diagram. The relevance of this approach has been confirmed by its implementation as a tool and by the results obtained from its application to several representative diagrams.
14
Content available remote Computing Implications with Negation from a Formal Context
EN
The objective of this article is to define an approach to generating implications with (or without) negation when only a formal context K = (G, M, I) is provided. To that end, we define a two-step procedure which first (i) computes implications whose premise is a key in the context K|K̄ representing the apposition of the context K and its complementary context K̄ with attributes in M̄ (negative attributes), and then (ii) uses an inference axiom we have defined to produce the whole set of implications.
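A small sketch of the apposition step (i) mentioned above: every attribute of a made-up context K is paired with a negated copy whose extent is the complement of its extent, giving K|K̄. The key computation and the paper's inference axiom are not shown.

```python
# Building the apposed context K|K~ used above: each attribute m of K gets a
# negated twin "not m" whose extent is the complement of m's extent.
# The key enumeration and the paper's inference axiom are not reproduced; toy data only.

G = {"g1", "g2", "g3"}                        # objects
I = {"m1": {"g1", "g2"}, "m2": {"g2"}}        # attribute -> objects having it

def apposition(objects, incidence):
    """Return K|K~ as attribute -> extent, adding a complemented column per attribute."""
    apposed = dict(incidence)
    for m, ext in incidence.items():
        apposed["not " + m] = objects - ext
    return apposed

for attr, ext in apposition(G, I).items():
    print(attr, sorted(ext))
# m1 ['g1', 'g2'] / m2 ['g2'] / not m1 ['g3'] / not m2 ['g1', 'g3']
```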
15
Content available remote Computing Formal Concepts by Attribute Sorting
EN
We present a novel approach to computing the formal concepts of a formal context. In terms of operations on Boolean matrices, the presented algorithm computes all maximal rectangles of the input Boolean matrix which are full of 1s. The algorithm combines basic ideas of previous approaches with our recent observations on the influence of attribute permutations and attribute sorting on the number of formal concepts which are computed multiple times. As a result, we present an algorithm which computes formal concepts by successive context reduction and attribute sorting. We prove its soundness, discuss its complexity and efficiency, and show that it outperforms other algorithms from the CbO family in terms of substantially lower numbers of formal concepts which are computed multiple times.
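For orientation, here is a compact Close-by-One (CbO) sketch on a made-up context; it enumerates every formal concept exactly once thanks to the canonicity test, but it does not include the successive context reduction and attribute sorting that the paper adds on top of the CbO scheme.

```python
# A compact Close-by-One (CbO) sketch: enumerates every formal concept of a
# Boolean context exactly once (canonicity test), without the context-reduction
# and attribute-sorting refinements proposed in the paper.  Toy data only.

objects = [0, 1, 2]
attributes = [0, 1, 2, 3]
# rows of a made-up Boolean matrix: rows[g] = set of attributes of object g
rows = [{0, 1}, {1, 2}, {1, 2, 3}]

def extent_of(attr):
    return {g for g in objects if attr in rows[g]}

def close(extent):
    """Intent of an extent: attributes shared by all its objects."""
    return {m for m in attributes if all(m in rows[g] for g in extent)}

def cbo(extent, intent, start):
    yield extent, intent
    for j in range(start, len(attributes)):
        if j in intent:
            continue
        new_extent = extent & extent_of(j)
        new_intent = close(new_extent)
        if all(m in intent for m in new_intent if m < j):   # canonicity test
            yield from cbo(new_extent, new_intent, j + 1)

top_extent = set(objects)
for ext, intent in cbo(top_extent, close(top_extent), 0):
    print(sorted(ext), sorted(intent))
```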
EN
This article describes the design process of an information extraction system (IES). The proposed design method is based on rules and on the use of formal concept analysis to arrange those rules appropriately in the knowledge base of the described system.
17
Content available remote Knowledge discovery in data using formal concept analysis and random projections
EN
In this paper our objective is to propose a random-projection-based formal concept analysis approach for knowledge discovery in data. We demonstrate the implementation of the proposed method on two real-world healthcare datasets. Formal Concept Analysis (FCA) is a mathematical framework that offers conceptual knowledge representation through hierarchical conceptual structures called concept lattices. However, computational complexity plays a major role in the construction of a concept lattice.
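A minimal sketch of the random projection step such an approach can start from (Gaussian projection in the Johnson-Lindenstrauss style, shown here with made-up data); how the projected data is then scaled into a formal context is not specified by the abstract and is not reproduced.

```python
# Gaussian random projection of a data matrix to a lower-dimensional space,
# the dimensionality-reduction step referred to above.  How the projected data
# is subsequently scaled into a formal context is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 500))                 # made-up data: 100 objects, 500 attributes

k = 30                                     # target dimension
R = rng.standard_normal((500, k)) / np.sqrt(k)
X_proj = X @ R                             # 100 x 30, pairwise distances approximately preserved

print(X_proj.shape)                        # (100, 30)
```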
18
Content available remote Relational Contexts and Relational Concepts
EN
Formal concept analysis (FCA) is a mathematical description and theory of the concepts implied in formal contexts. The standard formal contexts of FCA model binary relations between individuals (objects) and attributes in the real world. We usually describe each individual by some attributes, which induces relations between individuals and attributes; however, there also exist many relations between individuals themselves, for instance the parent-child relation in a family. In this paper, to model the relations between individuals in the real world, we propose a new kind of context for FCA, the relational context, which consists of a set U of objects and a binary relation r on U. Corresponding to the formal concepts of formal contexts, we present different kinds of relational concepts in relational contexts, which are pairs of sets of objects. First we define the standard relational concepts in relational contexts. Moreover, we discuss indirect relational concepts and negative relational concepts, which capture the indirectness and negativity of the relations in relational contexts, respectively. Finally, we define hybrid relational concepts, which are combinations of any two different kinds of relational concepts. In addition, we discuss the application of relational contexts and relational concepts in the field of supply chain management.
EN
Fuzzy formal concept analysis is concerned with formal contexts expressing scalar-valued fuzzy relationships between objects and their properties. Existing fuzzy approaches assume that the relationship between a given object and a given property is a matter of degree in a scale L (generally [0,1]). However, the extent to which "object o has property a" may sometimes be hard to assess precisely. It is then convenient to use a sub-interval of the scale L rather than a precise value. Such formal contexts naturally lead to interval-valued fuzzy formal concepts. The aim of the paper is twofold. First, we provide a sound minimal set of algebraic requirements for interval-valued implications so that the resulting Galois connection fulfils the fuzzy closure properties. Secondly, a new approach based on a generalization of the Gödel implication is proposed for building the complete lattice of all interval-valued fuzzy formal concepts.
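As scalar-valued background (not the paper's interval-valued construction), the sketch below shows the Gödel implication and the fuzzy intent operator it induces on a made-up fuzzy context.

```python
# Scalar-valued background for the abstract above: the Goedel implication and the
# fuzzy intent operator it induces, intent(A)(m) = min_g ( A(g) -> I(g, m) ).
# The interval-valued generalization studied in the paper is not reproduced; toy data only.

def godel_implication(a, b):
    return 1.0 if a <= b else b

# made-up fuzzy context: degree to which object g has attribute m
I = {("g1", "m1"): 1.0, ("g1", "m2"): 0.4,
     ("g2", "m1"): 0.7, ("g2", "m2"): 0.9}
objects = ["g1", "g2"]
attributes = ["m1", "m2"]

def intent(A):
    """Fuzzy intent of a fuzzy set A of objects."""
    return {m: min(godel_implication(A[g], I[(g, m)]) for g in objects) for m in attributes}

print(intent({"g1": 1.0, "g2": 0.5}))   # {'m1': 1.0, 'm2': 0.4}
```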
20
Content available remote Normalized-scale Relations and Their Concept Lattices in Relational Databases
EN
Formal Concept Analysis (FCA) is an effective tool for data mining and knowledge discovery which identifies conceptual structures in (formal) contexts. As many practical applications involve non-binary data, non-binary attributes are introduced in FCA via many-valued contexts. Conceptual scaling provides a complete framework for transforming any many-valued context into a context: each non-binary attribute is given a scale, and the scale is itself a context. Each relation in a relational database is a many-valued context of FCA. In this paper, we provide an approach to normalizing scales, i.e., each scale can be represented by a nominal scale and/or a set of statements. One advantage of normalizing scales is to avoid generating huge (binary) derived relations. Through the normalization, the concept lattice of a derived relation is reduced to a combination of the concept lattice of a derived nominal relation and a set of statements. Hence, without transforming a relation into a derived relation, one can determine the concepts of the derived relation not only from the concepts of the given scales, but also from the concepts of a derived nominal relation and a set of statements. The connection between the concept lattice of a derived nominal relation and the concept lattice of a derived relation is also considered.
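A minimal sketch of nominal scaling, the kind of scale the normalization above reduces to: every value v of a many-valued attribute a becomes a binary attribute "a=v". The toy relation is made up, and the paper's statement-based part of the normalization is not shown.

```python
# Nominal scaling of a many-valued context (a database relation) into a binary
# formal context: every value v of attribute a becomes the binary attribute "a=v".
# Toy relation only; the statement-based normalization proposed in the paper is not shown.

relation = [
    {"id": "t1", "colour": "red",  "size": "L"},
    {"id": "t2", "colour": "blue", "size": "L"},
    {"id": "t3", "colour": "red",  "size": "S"},
]

def nominal_scale(rows, key="id"):
    """Return object -> set of binary attributes of the form 'attribute=value'."""
    return {row[key]: {f"{a}={v}" for a, v in row.items() if a != key} for row in rows}

for obj, attrs in nominal_scale(relation).items():
    print(obj, sorted(attrs))
# t1 ['colour=red', 'size=L'] ...
```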