Results found: 17

Search results
Searched in keywords: computer algorithms
1
Trying to Understand PEG
EN
Parsing Expression Grammar (PEG) encodes a recursive-descent parser with limited backtracking. Its properties are useful in many applications, but it is not well understood as a language definition tool. In appearance, PEG is almost identical to a grammar in the Extended Backus-Naur Form (EBNF), and one may expect it to define the same language. But, due to the limited backtracking, PEG may reject some strings defined by the EBNF, which gives the impression that PEG is unpredictable. We note that for some grammars the limited backtracking is “efficient”, in the sense that it exhausts all possibilities. A PEG with efficient backtracking should therefore be easy to understand. There is no general algorithm to check whether a grammar has efficient backtracking, but it can often be checked by inspection. The paper outlines an interactive tool to facilitate such inspection.
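The effect of limited backtracking can be illustrated with a minimal Python sketch (the rule A is a hypothetical example, not taken from the paper): PEG's ordered choice commits to the first matching alternative, so it can reject a string that the corresponding EBNF grammar accepts.

```python
# Minimal sketch (not the paper's tool): PEG rule A <- "a" / "ab" versus
# EBNF rule A = "a" | "ab", tested on the input "ab".

def match_A_peg(s):
    # PEG ordered choice: commit to the first alternative that succeeds.
    for alt in ("a", "ab"):
        if s.startswith(alt):
            return len(alt)   # characters consumed; no later re-try
    return None

def accepts_peg(s):
    n = match_A_peg(s)
    return n is not None and n == len(s)   # the whole input must be consumed

def accepts_ebnf(s):
    # EBNF alternative: the rule defines the language {"a", "ab"}.
    return s in {"a", "ab"}

print(accepts_peg("ab"), accepts_ebnf("ab"))   # False True: PEG rejects "ab"
```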
2
Computing Bisimulation-Based Comparisons
EN
We provide the first algorithm with a polynomial time complexity, O((m + n)²n²), for computing the largest bisimulation-based auto-comparison of a labeled graph in the setting with counting successors, where m is the number of edges and n is the number of vertices. This setting corresponds to graded modalities in modal logics and qualified number restrictions in description logics. Furthermore, by using the idea of Henzinger et al. for computing simulations, we give an efficient algorithm, with complexity O((m + n)n), for computing the largest bisimulation-based auto-comparison and the directed similarity relation of a labeled graph in the setting without counting successors. We also adapt our former algorithm for computing the simulation pre-order of a labeled graph in the setting with counting successors.
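As a rough illustration of what the largest such relation is, the greatest simulation of a vertex-labeled graph (the simpler setting, without counting successors) can be computed by a naive fixed-point sketch; this is not the paper's algorithm, only a definition made executable.

```python
# Naive fixed-point sketch (illustration only, not the efficient algorithm):
# start from all label-compatible pairs and drop pairs that violate the
# successor condition until nothing changes.

def largest_simulation(vertices, edges, label):
    succ = {v: set() for v in vertices}
    for u, v in edges:
        succ[u].add(v)
    rel = {(u, v) for u in vertices for v in vertices if label[u] == label[v]}
    changed = True
    while changed:
        changed = False
        for (u, v) in list(rel):
            # (u, v) survives only if every successor of u is matched by some
            # successor of v that is still related.
            if any(all((u2, v2) not in rel for v2 in succ[v]) for u2 in succ[u]):
                rel.discard((u, v))
                changed = True
    return rel

# Tiny usage example with an assumed toy graph.
V = ["a", "b", "c"]
E = [("a", "b"), ("b", "c")]
L = {"a": "p", "b": "q", "c": "q"}
print(largest_simulation(V, E, L))
```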
3
Analysis of the accuracy of hydraulic calculations for gas transmission
EN
There are many mathematical algorithms whose main task is to determine the pressure drop in a gas transmission pipeline. The article analyses in detail the algorithms most commonly used worldwide for calculating the pressure drop in transmission pipelines, and the computed results are compared with actual data. The article also presents the impact of the individual computational modules on the accuracy of the selected mathematical models for determining the pressure drop in transmission pipelines.
4
Twelve Years of QBF Evaluations: QSAT Is PSPACE-Hard and It Shows
EN
Twelve years have elapsed since the first Quantified Boolean Formulas (QBFs) evaluation was held as an event linked to SAT conferences. During this period, researchers have striven to propose new algorithms and tools to solve challenging formulas, with evaluations periodically trying to assess the current state of the art. In this paper, we present an experimental account of solvers and formulas with the aim of understanding the progress in the QBF arena across these years. Unlike typical evaluations, the analysis is not confined to the snapshot of submitted solvers and formulas; rather, we consider several tools that were proposed over the last decade, and we run them on different formulas from previous QBF evaluations. The main contributions of our analysis, which are also the messages we would like to pass along to the research community, are: (i) many formulas that turned out to be difficult to solve in past evaluations remain challenging after twelve years, (ii) there is no single solver which can significantly outperform all the others, unless specific categories of formulas are considered, and (iii) the effectiveness of preprocessing depends both on the coupled solver and on the structure of the formula.
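For context, QSAT asks whether a fully quantified Boolean formula is true; a textbook example (not taken from the evaluations) is shown below, and it is the quantifier alternation that makes the problem PSPACE-complete in general.

```latex
% \forall x \exists y ((x \lor y) \land (\lnot x \lor \lnot y)) is true,
% because y can always be chosen as the negation of x; swapping the
% quantifiers to \exists y \forall x makes the same matrix false.
% Deciding the truth of such formulas (QSAT/TQBF) is the canonical
% PSPACE-complete problem.
\[
  \forall x\,\exists y\;\bigl((x \lor y)\land(\lnot x \lor \lnot y)\bigr)
\]
```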
5
Towards Deriving Conclusions from Cause-effect Relations
EN
In this work we propose an extension of logic programming under the stable model semantics, and of the action language BC, in which rule bodies and causal laws may contain a new kind of literal, called a causal literal, that allows us to inspect the causal justifications of standard atoms. To this end, we extend a recently proposed semantics in which each atom belonging to a stable model is associated with a justification in the form of an algebraic expression (corresponding to a logical proof built with rule labels). In particular, we use causal literals for evaluating and deriving new conclusions from statements like "A has been sufficient to cause B." We also use the proposed semantics to extend the action language BC with causal literals and show, through examples, how this action language is useful for expressing a high-level representation of some typical Knowledge Representation examples involving causal knowledge.
6
Answer Set Programming Modulo Acyclicity
EN
Acyclicity constraints are prevalent in knowledge representation and applications where acyclic data structures such as DAGs and trees play a role. Recently, such constraints have been considered in the satisfiability modulo theories (SMT) framework, and in this paper we carry out an analogous extension to the answer set programming (ASP) paradigm. The resulting formalism, ASP modulo acyclicity, offers a rich set of primitives to express constraints related to recursive structures. In the technical results of the paper, we relate the new generalization to standard ASP by showing (i) how acyclicity extensions translate into normal rules, (ii) how weight constraint programs can be instrumented by acyclicity extensions to capture stability in analogy to unfounded set checking, and (iii) how the gap between supported and stable models is effectively closed in the presence of such an extension. Moreover, we present an efficient implementation of acyclicity constraints by incorporating a respective propagator into the state-of-the-art ASP solver CLASP. The implementation provides a unique combination of traditional unfounded set checking with acyclicity propagation. In the experimental part, we evaluate the interplay of these orthogonal checks by equipping logic programs with supplementary acyclicity constraints. The performance results show that native support for acyclicity constraints is a worthwhile addition, furnishing a complementary modeling construct in ASP itself as well as effective means for translation-based ASP solving.
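The condition such a propagator maintains can be illustrated outside ASP: the selected edges must never close a directed cycle. The Python sketch below is not CLASP's incremental propagator, only a whole-graph re-check on an assumed toy graph.

```python
# Illustration only: an acyclicity constraint requires that the currently
# selected edges contain no directed cycle. A real propagator works
# incrementally; this sketch simply re-checks the whole edge set by DFS.

def is_acyclic(nodes, edges):
    succ = {n: [] for n in nodes}
    for u, v in edges:
        succ[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on the DFS stack / done
    color = {n: WHITE for n in nodes}

    def dfs(u):
        color[u] = GRAY
        for v in succ[u]:
            if color[v] == GRAY:          # back edge: a directed cycle exists
                return False
            if color[v] == WHITE and not dfs(v):
                return False
        color[u] = BLACK
        return True

    return all(dfs(n) for n in nodes if color[n] == WHITE)

print(is_acyclic("abc", [("a", "b"), ("b", "c")]))               # True
print(is_acyclic("abc", [("a", "b"), ("b", "c"), ("c", "a")]))   # False
```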
7
Maximizing T-complexity
EN
We investigate Mark Titchener’s T-complexity, an algorithm which measures the information content of finite strings. After introducing the T-complexity algorithm, we turn our attention to a particular class of “simple” finite strings. By exploiting special properties of simple strings, we obtain a fast algorithm to compute the maximum T-complexity among strings of a given length, and our estimates of these maxima show that T-complexity differs asymptotically from Kolmogorov complexity. Finally, we examine how closely de Bruijn sequences resemble strings with high T-complexity.
8
Latency of Neighborhood Based Recommender Systems
EN
Latency of user-based and item-based recommenders is evaluated. The two algorithms can deliver high-quality predictions in dynamically changing environments. However, their response time depends not only on the size, but also on the structure of the underlying datasets. This constitutes a major drawback when compared to two other competitive approaches, i.e., content-based and model-based systems. Therefore, we believe that there is a need for a comprehensive evaluation of the latency of the two algorithms. In a typical worst-case analysis of collaborative filtering algorithms, two assumptions are made. The first assumption is that data are stored in dense collections. The second assumption is that a large amount of computation can be performed in advance, during the training phase. As a result, it is advised to deploy a user-based system when the number of users is relatively small. Item-based algorithms are believed to have better technical properties when the number of items is small. We consider a situation in which the two assumptions are not necessarily met. We show that even though the latency of the two methods depends heavily on the proportion of users to items, this factor does not differentiate the two methods. We evaluate the algorithms on several real-life datasets. We augment the analysis with both graph-theoretical and experimental techniques.
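As a minimal sketch of what a user-based neighborhood predictor computes (illustrative code, not the paper's implementation; the rating data and similarity choice are assumptions), the item-based variant is symmetric and works on the transposed rating matrix.

```python
# Minimal user-based collaborative filtering sketch (illustration only).
# ratings[user][item] = rating value.
from math import sqrt

def cosine(a, b):
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def predict_user_based(ratings, user, item):
    # Weighted average of the ratings that other users gave to `item`,
    # weighted by their similarity to `user`.
    num = den = 0.0
    for other, their in ratings.items():
        if other != user and item in their:
            w = cosine(ratings[user], their)
            num += w * their[item]
            den += abs(w)
    return num / den if den else None

ratings = {
    "u1": {"i1": 5, "i2": 3},
    "u2": {"i1": 4, "i2": 2, "i3": 4},
    "u3": {"i2": 5, "i3": 1},
}
print(predict_user_based(ratings, "u1", "i3"))
```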
9
EN
The context of this work is the reconstruction of Petri net models for biological systems from experimental data. Such methods aim at generating all network alternatives fitting the given data. For a successful reconstruction, the data need to satisfy two properties: reproducibility and monotonicity. In this paper, we focus on a necessary preprocessing step for a recent reconstruction approach. We test the data for reproducibility, provide a feasibility test to detect cases where the reconstruction from the given data may fail, and provide a strategy to cope with the infeasible cases. After having performed the preprocessing step, it is guaranteed that the (given or modified) data are appropriate as input for the main reconstruction algorithm.
10
Parameter Synthesis for Timed Kripke Structures
EN
We show how to synthesise parameter values under which a given property, expressed in a certain extension of CTL, called RTCTLP, holds in a parametric timed Kripke structure. We prove the decidability of parameter synthesis for RTCTLP by showing how to restrict the infinite space of parameter valuations to its finite subset and employ a brute-force algorithm. The brute-force approach soon becomes intractable, therefore we propose a symbolic algorithm for RTCTLP parameter synthesis. Similarly to the fixed-point symbolic model checking approach, we introduce special operators which stabilise on the solution. The process of stabilisation is essentially a translation from the RTCTLP parameter synthesis problem to a discrete optimization task. We show that the proposed method is sound and complete and provide some complexity results. We argue that this approach leads to new opportunities in model checking, including the use of integer programming and related tools.
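The brute-force approach mentioned above can be pictured with a schematic sketch (all names are hypothetical; the actual RTCTLP model-checking step is abstracted into a placeholder predicate): once the parameter space has been restricted to a finite candidate set, every valuation is checked against the property.

```python
from itertools import product

# Schematic brute-force parameter synthesis (names hypothetical).
def synthesise(parameters, candidate_values, holds):
    """Return all valuations (dicts) under which holds(valuation) is True.

    `holds` stands in for the actual model-checking procedure."""
    solutions = []
    for values in product(candidate_values, repeat=len(parameters)):
        valuation = dict(zip(parameters, values))
        if holds(valuation):
            solutions.append(valuation)
    return solutions

# Toy usage: keep valuations where p1 + p2 <= 5 (a stand-in property).
print(synthesise(["p1", "p2"], range(4), lambda v: v["p1"] + v["p2"] <= 5))
```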
11
Old and New Algorithms for Minimal Coverability Sets
EN
Many algorithms for computing minimal coverability sets for Petri nets prune futures. That is, if a new marking strictly covers an old one, then not just the old marking but also some subset of its successor markings is discarded from search. In this publication, a simpler algorithm that lacks future pruning is presented and proven correct. Its performance is compared with future pruning. It is demonstrated, using examples, that neither approach is systematically better than the other. However, the simple algorithm has some attractive features. It never needs to re-construct pruned parts of the minimal coverability set. It automatically gives most of the advantage of future pruning, if the minimal coverability set is constructed in depth-first or most tokens first order, and if so-called history merging is applied. Some implementation aspects of minimal coverability set construction are also discussed. Some measurements are given to demonstrate the effect of construction order and other implementation aspects.
12
Discovery of Cancellation Regions within Process Mining Techniques
EN
Process mining is a relatively new field of computer science which deals with process discovery and analysis based on event logs. In this work we consider the problem of discovering workflow nets with cancellation regions from event logs. Cancellations occur in the majority of real-life event logs. Despite the large number of process mining techniques, little has been done on the discovery of cancellation regions. We show that the state-based region algorithm yields labeled Petri nets with an overcomplicated control-flow structure for logs with cancellations. We propose a novel method to discover cancellation regions from the transition systems built on event logs and show how to construct an equivalent workflow net with reset arcs in order to simplify the control-flow structure.
13
EN
We present combinatorial algorithms for solving three problems that appear in the study of the degeneration order ≤deg for the variety of finite-dimensional modules over a k-algebra Δ, where M ≤deg N means that the module N belongs to the orbit closure O(M) of the module M in the variety of Δ-modules. In particular, we introduce algorithmic techniques for deciding whether or not the relation M ≤deg N holds and for determining all predecessors (resp. successors) of a given module M with respect to ≤deg. The order ≤deg plays an important role in modern algebraic geometry and module theory. Applications of our technique and experimental tests for particular classes of algebras are presented. The results show that computer algebra techniques and algorithmic computations provide important tools for solving theoretical mathematical problems of high computational complexity. The algorithms are implemented and published as part of an open source GAP package called QPA.
14
A Sweep-Line Method for Büchi Automata-based Model Checking
EN
The sweep-line method allows explicit state model checkers to delete states from memory on-the-fly during state space exploration, thereby lowering the memory demands of the verification procedure. The sweep-line method is based on a least-progress-first search order that prohibits the immediate use of standard on-the-fly Büchi automata-based model checking algorithms that rely on a depth-first search order in the search for an acceptance cycle. This paper proposes and experimentally evaluates an algorithm for Büchi automata-based model checking compatible with the search order and deletion of states prescribed by the sweep-line method.
15
EN
Focusing on novel database application scenarios, where data sets increasingly arise in uncertain and imprecise formats, in this paper we propose a novel decomposition framework for efficiently computing and querying multidimensional OLAP data cubes over probabilistic data, which capture this kind of data well. Several models and algorithms supported by the proposed framework are formally presented and described in detail, based on well-understood statistical/probabilistic tools; they converge in the definition of the so-called probabilistic OLAP data cubes, the most prominent result of our research. Finally, we complete our analytical contribution by introducing an innovative Probability Distribution Function (PDF)-based approach, which makes use of well-known probabilistic estimator theory, for efficiently querying probabilistic OLAP data cubes, along with a comprehensive experimental assessment and analysis over synthetic probabilistic databases.
16
EN
In the paper, we focus on ant-based clustering of time series data represented by means of so-called delta episode information systems. Clustering is performed on the basis of the delta representation of a time series, i.e., we are interested in the character of changes between two consecutive data points rather than in the original data points. Most algorithms use similarity measures to compare time series. In the paper, we propose a measure based on temporal rough set flow graphs. This measure has a probabilistic character and is considered in terms of the Decision-Theoretic Rough Set (DTRS) model. To perform ant-based clustering, an algorithm based on the versions proposed by J. Deneubourg, E. Lumer and B. Faieta as well as J. Handl et al. is used.
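The delta representation referred to above is simply the sequence of changes between consecutive observations; a minimal sketch follows (the symbolic discretisation and its threshold are illustrative assumptions, not values taken from the paper).

```python
# Minimal sketch of a delta representation of a time series (illustration only):
# each value is replaced by the change relative to its predecessor, optionally
# discretised into symbols describing the character of the change.

def delta(series):
    return [b - a for a, b in zip(series, series[1:])]

def delta_symbols(series, eps=0.5):
    # 'u' = increase, 'd' = decrease, 'c' = (almost) constant; eps is an
    # assumed tolerance, not a value from the paper.
    out = []
    for d in delta(series):
        out.append('u' if d > eps else 'd' if d < -eps else 'c')
    return out

ts = [10.0, 10.2, 11.5, 11.4, 9.0]
print(delta(ts))          # approximately [0.2, 1.3, -0.1, -2.4]
print(delta_symbols(ts))  # ['c', 'u', 'c', 'd']
```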
17
EN
The paper presents ant colony algorithms for finding a maximum clique in a graph. The clique model is used to determine the largest group of mutually connected electronic elements on a printed circuit board, in order to minimize the total length of the connections between them and, consequently, the amount of material used to manufacture them. The presented algorithm is based on different aspects of ant behaviour than previously developed algorithms; the main difference is an exploration phase, which is absent in the other ant algorithms. The algorithm is compared with Algorithm 457 with respect to the size of the cliques found. The influence of local search is also examined, comparing the (2,1)-exchange local search procedure with the local search procedure based on the ant colony metaheuristic.
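The construction step that individual ants typically perform, extending a clique with vertices adjacent to every vertex already chosen, can be sketched as a plain greedy heuristic (illustration only, without pheromone trails; the toy graph is an assumption).

```python
import random

# Plain greedy clique construction (illustration only): repeatedly add a vertex
# adjacent to every vertex already in the clique. Real ant colony algorithms
# bias this choice by pheromone levels; here the choice is uniformly random.

def greedy_clique(vertices, adj, rng=random):
    clique = []
    candidates = set(vertices)
    while candidates:
        v = rng.choice(sorted(candidates))
        clique.append(v)
        candidates = {u for u in candidates if u != v and u in adj[v]}
    return clique

# Toy graph: {1, 2, 3} form a triangle, 4 hangs off vertex 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(greedy_clique(list(adj), adj))
```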