Search results
Searched in keywords: decision rules
Results found: 44
EN
The article presents an approach to glass type identification based on rough set theory, implemented in the RSES program. The theoretical foundations of the method are presented, the data analysis process is described, and the results of glass type identification are reported.
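Although the listing gives only the abstract, the core mechanism behind RSES-style rough-set analysis is easy to illustrate. Below is a minimal, hypothetical Python sketch of the lower and upper approximations of a decision class; the toy attributes and data are invented and are not the glass data set from the article.

```python
# Lower/upper rough-set approximations of a target decision class.
from collections import defaultdict

def approximations(objects, condition_attrs, decision_attr, target_class):
    """Return (lower, upper) approximations of target_class."""
    # Group objects into indiscernibility classes by their condition values.
    blocks = defaultdict(list)
    for obj in objects:
        key = tuple(obj[a] for a in condition_attrs)
        blocks[key].append(obj)
    lower, upper = [], []
    for block in blocks.values():
        decisions = {obj[decision_attr] for obj in block}
        if decisions == {target_class}:   # block lies fully inside the class
            lower.extend(block)
        if target_class in decisions:     # block overlaps the class
            upper.extend(block)
    return lower, upper

samples = [
    {"RI": "low", "Na": "high", "type": "window"},
    {"RI": "low", "Na": "high", "type": "window"},
    {"RI": "high", "Na": "low", "type": "container"},
    {"RI": "low", "Na": "low", "type": "window"},
    {"RI": "low", "Na": "low", "type": "container"},  # inconsistent block
]
low, up = approximations(samples, ["RI", "Na"], "type", "window")
print(len(low), len(up))  # 2 objects certainly "window", 4 possibly
```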
EN
When we speak about a Smart City's transport system, the idea of autonomous vehicles is the first thing that comes to mind. Today, it is widely believed that the introduction of autonomous vehicles into traffic will increase road safety. However, driverless cars are not a solution by themselves. Road safety, and accordingly sustainability, will strongly depend on the decision-making algorithms built into the control module. Therefore, the goal of our research is to design and test a data mining algorithm based on the Entity-Attribute-Value (EAV) model for decision making in the Intelligent System of fully- or semi-autonomous vehicles. In this article, we describe the methodology for creating the three main modules of the designed Intelligent System: (1) an object detection module; (2) a data analysis module; (3) a knowledge database built on decision rules generated with the help of our data mining algorithm. To build the decision table on the basis of real data, we tested our algorithm on a simple collection of photos from a Polish two-lane road. The generated rules provide classification results comparable to the dynamic programming approach for optimization of decision rules relative to length or support. Moreover, thanks to excluding the mistakes made at the object detection stage, our decision-making algorithm works faster than existing ones with the same level of correctness.
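As a rough illustration of the EAV model mentioned above, the hypothetical sketch below pivots EAV triples into a flat decision table and applies one invented rule; none of the entities, attributes or the rule come from the paper.

```python
# Pivot Entity-Attribute-Value triples into a flat decision table,
# then match one illustrative decision rule against each row.
eav = [
    ("obj1", "type", "car"), ("obj1", "lane", "same"), ("obj1", "distance", "near"),
    ("obj2", "type", "sign"), ("obj2", "lane", "none"), ("obj2", "distance", "far"),
]

def to_decision_table(triples):
    table = {}
    for entity, attribute, value in triples:
        table.setdefault(entity, {})[attribute] = value
    return table

table = to_decision_table(eav)
# Invented rule: (type=car) & (distance=near) -> brake
for entity, row in table.items():
    if row.get("type") == "car" and row.get("distance") == "near":
        print(entity, "-> brake")
```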
EN
Selection is a key element of the cartographic generalisation process, often being its first stage. On the other hand, it is a component of other generalisation operators, such as simplification. One of the approaches used in generalisation is the condition-action approach. The author uses a condition-action approach based on three types of rough logics (Rough Set Theory (RST), Dominance-Based Rough Set Theory (DRST) and Fuzzy-Rough Set Theory (FRST)), checking the possibility of their use in the process of selecting topographic objects (buildings, roads, rivers) and comparing the obtained results. The complexity of the decision system (the number of rules and their conditions) and its effectiveness are assessed, both quantitatively and qualitatively (through visual assessment). The conducted research indicates the advantage of the DRST and RST approaches (with the CN2 algorithm) due to the quality of the obtained selection, the greater simplicity of the decision system, and better-developed IT tools enabling the use of these systems. At this stage, the FRST approach, which is characterised by the highest complexity of the created rules and the worst selection results, is not recommended. Particular approaches have limitations resulting from the need to select appropriate measurement scales for the attributes used in them. Special attention should be paid to the selection of network objects, for which the use of only a condition-action approach, without maintaining consistency of the network, may not produce the desired results. Unlike approaches based on classical logic, rough approaches allow the use of incomplete or contradictory information. The proposed tools can, in their current form, find auxiliary use in the selection of topographic objects, and potentially also in other generalisation operators.
EN
Parkinson’s disease (PD) is the second most common neurodegenerative disease (ND) after Alzheimer’s. Cures for both NDs are currently unavailable. OBJECTIVE: The purpose of our study was to predict the results of different treatments of PD patients in order to find an optimal one. METHODS: We compared rough set (RS) and other machine learning (ML) models to describe and predict disease progression, expressed as UPDRS values (Unified Parkinson’s Disease Rating Scale), in three groups of Parkinson’s patients: 23 BMT (Best Medical Treatment) patients on medication; 24 DBS patients on medication and on DBS therapy (Deep Brain Stimulation) after surgery performed during our study; and 15 POP (Postoperative) patients who had had surgery earlier (before the beginning of our research). Every PD patient had three visits, approximately every six months. The first visit for DBS patients took place before surgery. On the basis of the following condition attributes: disease duration, saccadic eye movement parameters, and neuropsychological tests - PDQ39 (Parkinson’s Disease Questionnaire, a disease-specific health-related quality-of-life questionnaire) and the Epworth Sleepiness Scale - we estimated UPDRS changes (the decision attribute). RESULTS: By means of RS rules obtained for the first visit of BMT/DBS/POP patients, we predicted UPDRS values in the following year (two visits) with a global accuracy of 70% for both BMT visits, 56% for DBS, and 67% and 79% for the second and third POP visits. The accuracy obtained by ML models was generally in the same range, but it was calculated separately for different sessions (MedOFF/MedON). We used RS rules obtained in BMT patients to predict the UPDRS of DBS patients: for the first session, DBSW1, global accuracy was 64%; for the second, DBSW2, 85%; and for the third, DBSW3, 74%, but only for DBS patients during stimulation-ON. ML models gave better accuracy for the DBSW1/W2 session S1 (MedOFF), 88%, but inferior results for session S3 (MedON), 58% and 54%. Neither RS nor ML could predict UPDRS in DBS patients during stimulation-OFF visits because of differences in UPDRS. Using RS rules from BMT or DBS patients, we could not predict the UPDRS of the POP group, but with certain limitations (only for MedON) we derived such predictions for the POP group from the results of DBS patients by using ML models (60%). SIGNIFICANCE: Thanks to our RS and ML methods, we were able to predict Parkinson’s disease progression in dissimilar groups of patients with different treatments. This might lead, in the future, to the discovery of universal rules of PD progression and to optimisation of treatment.
EN
Knowledge of uncertainty in analytical results is of prime importance in assessments of compliance with requirements set out for the quality of water intended for human consumption. Assessments of drinking water quality can be performed using either a deterministic or a probabilistic method. In the former approach, every single result is referred directly to the parametric value, while in the probabilistic method uncertainty related to analytical results is taken into account during the decision-making process. In the present research, laboratory uncertainty and uncertainty determined on the basis of results of analyses of duplicate samples collected in two Polish cities were compared and used in the probabilistic approach to water quality assessment. Using the probabilistic method, more results were considered to be "above the parametric value". Most exceedances were observed when the maximum allowable uncertainty set out in the Regulation of the Minister of Health of 7 December 2017 was used, which is due to the highest values of these uncertainties. The fewest results above parametric values in the probabilistic approach were observed when measurement uncertainty was considered.
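A minimal sketch of the two assessment modes described above, assuming a simple interval interpretation of expanded uncertainty; the parameter, numbers and limit are illustrative, not values from the cited study.

```python
# Deterministic vs. probabilistic compliance check for one analytical result.
def deterministic_compliant(result, parametric_value):
    # The bare result is compared directly with the parametric value.
    return result <= parametric_value

def probabilistic_compliant(result, expanded_uncertainty, parametric_value):
    # The whole uncertainty interval must lie below the parametric value;
    # otherwise the result is treated as possibly "above the parametric value".
    return result + expanded_uncertainty <= parametric_value

nitrate = 48.0      # mg/l, illustrative result
u_expanded = 4.8    # expanded uncertainty, illustrative
limit = 50.0        # parametric value, illustrative

print(deterministic_compliant(nitrate, limit))                # True: compliant
print(probabilistic_compliant(nitrate, u_expanded, limit))    # False: flagged
```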
6. Comparison of Heuristics for Optimization of Association Rules
EN
In this paper, seven greedy heuristics for the construction of association rules are compared from the point of view of the length and coverage of the constructed rules. The obtained rules are also compared with optimal ones constructed by dynamic programming algorithms. The average relative difference between the length of rules constructed by the best heuristic and the minimum length of rules is at most 4%; the situation is similar for coverage.
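The seven heuristics are not specified in the abstract, so the sketch below shows just one generic greedy scheme for growing a single rule: repeatedly add the attribute-value pair that eliminates the most conflicting rows. All names and data are illustrative.

```python
# Greedy growth of one rule seeded from a chosen row: add conditions until
# no row with a different target value still matches the rule.
def greedy_rule(rows, attrs, target_attr, seed_row):
    target = seed_row[target_attr]
    conflicting = [r for r in rows if r[target_attr] != target]
    conditions = {}
    while conflicting and len(conditions) < len(attrs):
        # Pick the condition from the seed row that removes most conflicts.
        best = max(
            ((a, seed_row[a]) for a in attrs if a not in conditions),
            key=lambda av: sum(1 for r in conflicting if r[av[0]] != av[1]),
        )
        conditions[best[0]] = best[1]
        conflicting = [r for r in conflicting if r[best[0]] == best[1]]
    return conditions, target

rows = [
    {"a": 1, "b": 0, "c": 1, "d": "yes"},
    {"a": 1, "b": 1, "c": 0, "d": "no"},
    {"a": 0, "b": 0, "c": 1, "d": "no"},
]
print(greedy_rule(rows, ["a", "b", "c"], "d", rows[0]))
# ({'a': 1, 'b': 0}, 'yes') -- a rule of length 2
```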
EN
The article is a continuation of research connected with a modified dynamic programming algorithm for optimization of decision rules relative to coverage. The paper contains experimental results for a rule-based classifier using data sets from the UCI Machine Learning Repository.
EN
The aim of the paper is to use the Rough Set approach to induce decision rules on LCA use in selected business models of SMEs. For that purpose, the results of the PARP survey "Sustainable production patterns" are used together with defined business model types. The typology of SME business models is presented in the paper and used to classify companies into different business model types. This is followed by the development of condition attribute and decision attribute sets and the induction of decision rules for the different business model types.
9. Vessels Route Planning Problem with Uncertain Data
EN
The purpose of this paper is to find a solution for route planning in transport networks where the costs of tracks, the safety factor and the travel time are ambiguous. The approach is based on the Dempster-Shafer theory and the well-known Dijkstra's algorithm. Important in this approach are the factors influencing the mentioned coefficients, with uncertainty represented by probability intervals. Based on these intervals, the quality interval of each route can be determined. The applied decision rules can be defined by the end user.
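As a hedged illustration of routing with uncertain edge costs, the sketch below runs a plain Dijkstra search on the lower bounds of interval-valued costs and reports the cost interval of the selected route; it stands in for, and greatly simplifies, the Dempster-Shafer treatment described in the abstract. The graph and intervals are invented.

```python
# Dijkstra on interval-valued edge costs: relax on the lower bound,
# carry the upper bound along to report a quality interval per route.
import heapq

def dijkstra_interval(graph, start, goal):
    # graph[u] = list of (v, (cost_low, cost_high))
    queue = [(0.0, 0.0, start, [start])]   # (low, high, node, path)
    best_low = {start: 0.0}
    while queue:
        low, high, node, path = heapq.heappop(queue)
        if node == goal:
            return path, (low, high)       # route and its cost interval
        for nxt, (c_low, c_high) in graph.get(node, []):
            if low + c_low < best_low.get(nxt, float("inf")):
                best_low[nxt] = low + c_low
                heapq.heappush(queue, (low + c_low, high + c_high, nxt, path + [nxt]))
    return None, None

graph = {
    "A": [("B", (2, 4)), ("C", (1, 5))],
    "B": [("D", (2, 3))],
    "C": [("D", (4, 6))],
}
print(dijkstra_interval(graph, "A", "D"))  # (['A', 'B', 'D'], (4, 7))
```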
10. Dynamic Programming Approach for Construction of Association Rule Systems
EN
In the paper, an application of the dynamic programming approach to optimization of association rules from the point of view of knowledge representation is considered. The association rule set is optimized in two stages: first for minimum cardinality and then for minimum length of rules. Experimental results present the cardinality of the set of association rules constructed for an information system and a lower bound on the minimum possible cardinality of the rule set, based on information obtained during the algorithm's work, as well as the results obtained for length.
EN
In the paper, an application of the dynamic programming approach to global optimization of approximate association rules relative to coverage and length is presented. It extends the dynamic programming approach for optimization of decision rules to inconsistent tables. Experimental results with data sets from the UCI Machine Learning Repository are included.
EN
A new method for induction of decision rules is presented. It is based on subsequent decomposition of the set of training data into subsets and searching for hypotheses for each of these subsets. As a result, decision rules are induced hierarchically, and - because of the reduced size of decision tables - this process is less computationally intensive. Simpler data models obtained in the process of decomposition make it possible to increase the efficiency of rule induction and the accuracy of new data classification. Such an approach leads to a significant improvement of algorithms used for exploration of real databases, which has been verified through experimental studies. The decomposition can, therefore, be seen as a new, efficient method for overcoming problems in database exploration resulting from excessive volumes and high complexity of data.
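The decomposition idea can be illustrated with a short, hypothetical sketch: the training set is split into smaller decision tables and rules are induced independently on each, then pooled. The splitting criterion and the trivial per-subset learner are placeholders, not the method from the paper.

```python
# Split the training set by one attribute and induce rules per subset.
from collections import defaultdict

def decompose(rows, split_attr):
    subsets = defaultdict(list)
    for r in rows:
        subsets[r[split_attr]].append(r)
    return subsets

def induce_rules(rows, attrs, decision):
    # Placeholder learner: one rule per distinct (conditions -> decision) row.
    return {tuple((a, r[a]) for a in attrs): r[decision] for r in rows}

rows = [
    {"x": 0, "y": 1, "d": "A"}, {"x": 0, "y": 0, "d": "B"},
    {"x": 1, "y": 1, "d": "A"}, {"x": 1, "y": 0, "d": "A"},
]
rules = {}
for value, subset in decompose(rows, "x").items():
    # Each subset is a smaller decision table, so induction is cheaper.
    rules.update(induce_rules(subset, ["x", "y"], "d"))
print(len(rules), "rules")
```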
13. Relationships Between Length and Coverage of Decision Rules
EN
The paper describes a new tool for studying relationships between the length and coverage of exact decision rules. This tool is based on the dynamic programming approach. We also present results of experiments with decision tables from the UCI Machine Learning Repository.
EN
This paper discusses issues related to incomplete information databases and considers a logical framework for rule generation. In our approach, a rule is an implication satisfying specified constraints. The term incomplete information databases covers many types of inexact data, such as non-deterministic information, data with missing values, incomplete information, or interval-valued data. In the paper, we start by defining certain and possible rules based on non-deterministic information. We use their mathematical properties to solve computational problems related to rule generation. Then, we reconsider the NIS-Apriori algorithm, which generates a given implication if and only if it is either a certain rule or a possible rule satisfying the constraints. In this sense, NIS-Apriori is logically sound and complete. In this paper, we pay special attention to the soundness and completeness of the considered algorithmic framework, which is not necessarily obvious when switching from exact to inexact data sets. Moreover, we analyze different types of non-deterministic information corresponding to different types of the underlying attributes, i.e., value sets for qualitative attributes and intervals for quantitative attributes, and we discuss various approaches to the construction of descriptors related to particular attributes within the rules' premises. An improved implementation of NIS-Apriori and some demonstrations of an experimental application of our approach to data sets taken from the UCI machine learning repository are also presented. Last but not least, we show simplified proofs of some of our theoretical results.
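The distinction between certain and possible rules rests on how a descriptor matches set-valued (non-deterministic) entries. A minimal sketch, with an invented table and descriptor:

```python
# "Certain" vs. "possible" matching against set-valued cells.
rows = [
    {"colour": {"red"},         "size": {"big"}},
    {"colour": {"red", "blue"}, "size": {"big"}},          # red OR blue
    {"colour": {"blue"},        "size": {"big", "small"}},
]

def certainly(row, attr, value):
    return row[attr] == {value}   # every possible value satisfies the descriptor

def possibly(row, attr, value):
    return value in row[attr]     # at least one possible value satisfies it

descriptor = ("colour", "red")
print(sum(certainly(r, *descriptor) for r in rows))  # 1 certain match
print(sum(possibly(r, *descriptor) for r in rows))   # 2 possible matches
```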
EN
Decision rules are a commonly used tool for classification and knowledge discovery in data. The aim of this paper is to provide a decision rule-based framework for the analysis of survival data and to apply it in mining data describing patients after bone marrow transplantation. The paper presents a rule induction algorithm which uses a sequential covering strategy and rule quality measures. An extended version of the algorithm gives the possibility of taking into account the user's requirements in the form of predefined rules and attributes which should be included in the final rule set. Additionally, in order to summarize the knowledge expressed by the rule-based model, we propose a rule filtration algorithm which consists in the selection of statistically significant rules describing the most disjoint parts of the entire data set. Selected rules are identified with so-called survival patterns. Survival patterns are rules whose conclusions contain Kaplan-Meier estimates of the survival function. In this way, the paper combines rule-based data classification and description with survival analysis. The efficiency of our method is illustrated with the analysis of data describing patients after bone marrow transplantation.
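A minimal sketch of the sequential covering strategy named in the abstract: grow one rule at a time, remove the positive examples it covers, and repeat. find_best_rule is a placeholder for any quality-measure-driven rule grower; the trivial one used here just memorises an example.

```python
# Generic sequential covering loop over a list of example dicts.
def sequential_covering(examples, target_class, find_best_rule, min_covered=1):
    rules = []
    positives = [e for e in examples if e["class"] == target_class]
    while positives:
        rule = find_best_rule(positives, examples)
        covered = [e for e in positives if rule(e)]
        if len(covered) < min_covered:
            break                          # no useful rule can be grown
        rules.append(rule)
        positives = [e for e in positives if not rule(e)]
    return rules

def find_best_rule(positives, examples):
    # Trivial placeholder: memorise the attribute values of the first
    # still-uncovered positive example.
    proto = {k: v for k, v in positives[0].items() if k != "class"}
    return lambda e: all(e.get(k) == v for k, v in proto.items())

data = [{"a": 1, "class": "good"}, {"a": 2, "class": "good"}, {"a": 3, "class": "bad"}]
print(len(sequential_covering(data, "good", find_best_rule)))  # 2 rules
```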
EN
The paper presents an algorithm for decision rule redefinition that is based on evaluation of the importance of the elementary conditions occurring in induced rules. Standard and simplified (heuristic) indices of elementary condition importance are described. The results obtained with both indices are compared with respect to classifier quality and the elementary condition rankings estimated by the indices. The efficiency of the proposed algorithm has been verified on 21 benchmark data sets. Moreover, an analysis of practical applications of the proposed methods for biomedical and medical data analysis is presented. The obtained results show that the redefinition considerably reduces the rule set needed to describe each decision class. Additionally, after rule set redefinition, negated elementary conditions may also occur in the new rules.
17. Classifiers Based on Optimal Decision Rules
EN
Based on the dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classification results: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).
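A hedged sketch of how a classifier can be assembled from decision rules: each matching rule votes for its class, here weighted by coverage. The weighting scheme is an assumption for illustration, not necessarily the one used in the paper.

```python
# Coverage-weighted voting over a list of (conditions, decision, coverage).
def classify(obj, rules):
    votes = {}
    for conditions, decision, coverage in rules:
        if all(obj.get(a) == v for a, v in conditions.items()):
            votes[decision] = votes.get(decision, 0) + coverage
    return max(votes, key=votes.get) if votes else None

rules = [
    ({"a": 1}, "yes", 10),
    ({"b": 0}, "no", 3),
    ({"a": 1, "b": 0}, "yes", 5),
]
print(classify({"a": 1, "b": 0}, rules))  # "yes" (15 votes vs 3)
```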
18. CHIRA - Convex Hull Based Iterative Algorithm of Rules Aggregation
EN
In the paper we present CHIRA, an algorithm performing aggregation of decision rules. New elementary conditions, which are linear combinations of attributes, may appear in rule premises during the aggregation, leading to so-called oblique rules. The algorithm merges rules iteratively, in pairs, according to a certain order specified in advance. It applies the procedure of determining convex hulls for the regions in feature space covered by the aggregated rules. CHIRA can be treated as a generalization of rule shortening and joining algorithms which, unlike them, allows the rule representation language to be changed. Application of the presented algorithm decreases the number of rules, especially for data in which decision classes are separated by hyperplanes not perpendicular to the attribute axes. The efficiency of CHIRA has been verified on rules obtained by two known rule induction algorithms, RIPPER and q-ModLEM, run on 18 benchmark data sets. Additionally, the algorithm has been applied to synthetic data as well as to a real-life set concerning the classification of natural hazards in hard-coal mines.
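A rough, hypothetical illustration of the aggregation step: points covered by two axis-parallel rules are merged via their convex hull, whose facets yield oblique conditions (linear combinations of attributes). This toy uses scipy and invented points; it is not the CHIRA algorithm itself.

```python
# Merge the regions of two rules by taking the convex hull of the points
# they cover; each hull facet is an oblique elementary condition.
import numpy as np
from scipy.spatial import ConvexHull

covered = np.array([[0, 0], [1, 0], [0, 1],      # points covered by rule 1
                    [2, 2], [3, 2], [2, 3]])     # points covered by rule 2
hull = ConvexHull(covered)
# Each facet is an inequality a*x + b*y + c <= 0 for points inside the hull.
for a, b, c in hull.equations:
    print(f"{a:+.2f}*x {b:+.2f}*y {c:+.2f} <= 0")
```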
EN
In order to handle inconsistencies in ordinal and monotonic information systems, several relaxed versions of the Dominance-based Rough Set Approach (DRSA) have been proposed, e.g., VC-DRSA. These versions use special consistency measures to admit some inconsistent objects into the lower approximations. The minimal consistency level that has to be attained by objects included in the lower approximations is defined using prior knowledge or a trial-and-error procedure. To avoid dependence on prior knowledge, an alternative way of handling inconsistencies is to iteratively eliminate the most inconsistent objects (according to some measure) until the information system becomes consistent. This idea is the basis of a new method of handling inconsistencies presented in this paper, called TIPStoC. The TIPStoC algorithm is illustrated by an example from the area of telecommunication, and the efficiency of the new method is confirmed by a computational experiment.
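A minimal sketch of the iterative idea behind TIPStoC: repeatedly remove the most inconsistent object until the table is consistent. The inconsistency measure used here (minority decision in the largest mixed indiscernibility block) is a simple placeholder, not the measure from the paper.

```python
# Iteratively eliminate inconsistent objects until no indiscernibility
# block contains more than one decision value.
from collections import defaultdict

def mixed_blocks(rows, attrs, decision):
    blocks = defaultdict(list)
    for r in rows:
        blocks[tuple(r[a] for a in attrs)].append(r)
    return [b for b in blocks.values() if len({r[decision] for r in b}) > 1]

def make_consistent(rows, attrs, decision):
    rows = list(rows)
    while True:
        mixed = mixed_blocks(rows, attrs, decision)
        if not mixed:
            return rows
        # Placeholder: drop a minority-decision object of the largest block.
        block = max(mixed, key=len)
        counts = defaultdict(int)
        for r in block:
            counts[r[decision]] += 1
        minority = min(counts, key=counts.get)
        rows.remove(next(r for r in block if r[decision] == minority))

data = [{"a": 1, "d": "x"}, {"a": 1, "d": "x"}, {"a": 1, "d": "y"}, {"a": 2, "d": "y"}]
print(len(make_consistent(data, ["a"], "d")))  # 3 rows remain
```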
20. Incremental rule-based learners for handling concept drift: an overview
EN
Learning from non-stationary environments is a very popular research topic. There already exist algorithms that deal with the concept drift problem. Among them are online or incremental learners, which process data instance by instance. Their knowledge representation can take different forms, such as decision rules, which have not received enough attention in learning with concept drift. This paper reviews incremental rule-based learners designed for changing environments. It describes four of the proposed algorithms: FLORA, AQ11-PM+WAH, FACIL and VFDR. These four solutions can be compared on several criteria, such as the type of processed data, adjustment to changes, type of maintained memory, knowledge representation, and others.
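A hedged sketch of the windowing scheme several of the reviewed learners build on (e.g. FLORA-style forgetting): the rule set is relearned from a window of recent instances so that outdated concepts fade. relearn is a placeholder for any rule induction routine.

```python
# Sliding-window learner: old instances drop out, so drifted concepts fade.
from collections import deque

class WindowedRuleLearner:
    def __init__(self, relearn, window_size=100):
        self.window = deque(maxlen=window_size)  # bounded recent-instance memory
        self.relearn = relearn
        self.rules = []

    def update(self, instance):
        self.window.append(instance)
        # Rebuilding on every instance is the simplest (and costliest) policy;
        # real incremental learners adjust their rule sets in place.
        self.rules = self.relearn(list(self.window))

# Illustrative use with a trivial "learner" that keeps the majority label.
learner = WindowedRuleLearner(lambda w: [max(w, key=w.count)], window_size=3)
for label in ["a", "a", "b", "b", "b"]:
    learner.update(label)
print(learner.rules)  # ['b'] once the window has drifted to the new concept
```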