Search results
Searched in keywords: probabilistic model checking
Results found: 5
1
Along with their modified versions, value iteration and policy iteration are well-known algorithms for the probabilistic model checking of Markov decision processes. One challenge with these methods is that they are time-consuming in most cases. Several techniques have been proposed to improve the performance of iterative methods for probabilistic model checking; however, the running times of these techniques depend on the graph structure of the model used, and in some cases their performance can be worse than that of the standard methods. In this paper, we propose two new heuristics for accelerating the modified policy iteration method. We first define a criterion for the usefulness of the computations of each iteration of this method. The first contribution of our work is to develop and use this criterion to reduce the number of iterations of modified policy iteration. As the second contribution, we propose a new approach for identifying useless updates in each iteration; it reduces the running time of the computations by avoiding useless updates of states. The proposed heuristics have been implemented in the PRISM model checker and applied to several standard case studies. We compare the running time of our heuristics with the running times of previous standard and improved methods. Our experimental results show that our techniques yield a significant speed-up.
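A minimal sketch of the kind of computation involved (not the paper's implementation; the MDP, the sweep count k and the tolerance are illustrative assumptions): modified policy iteration alternates a greedy policy-improvement step with a few value-iteration sweeps under the fixed policy, which is where per-state update-skipping heuristics such as the ones proposed above would apply.

```python
# Hedged sketch: modified policy iteration for max reachability probabilities
# in a small MDP. State/transition data below are purely illustrative.

def modified_policy_iteration(n_states, actions, target, k=5, eps=1e-8):
    """actions[s] is a list of distributions; each distribution is a list of
    (successor, probability) pairs. Returns max reachability probabilities."""
    v = [1.0 if s in target else 0.0 for s in range(n_states)]
    policy = [0] * n_states
    while True:
        # Policy improvement: pick the action maximising the one-step value.
        changed = False
        for s in range(n_states):
            if s in target:
                continue
            best = max(range(len(actions[s])),
                       key=lambda a: sum(p * v[t] for t, p in actions[s][a]))
            if best != policy[s]:
                policy[s] = best
                changed = True
        # Partial evaluation: k value-iteration sweeps under the fixed policy
        # (the paper's heuristics would additionally detect and skip updates
        # that cannot change the result).
        for _ in range(k):
            delta = 0.0
            for s in range(n_states):
                if s in target:
                    continue
                new = sum(p * v[t] for t, p in actions[s][policy[s]])
                delta = max(delta, abs(new - v[s]))
                v[s] = new
            if delta < eps:
                break
        if not changed and delta < eps:
            return v, policy

# Tiny illustrative MDP: state 2 is the target.
acts = {0: [[(1, 0.5), (0, 0.5)], [(2, 0.3), (0, 0.7)]],
        1: [[(2, 1.0)]],
        2: [[(2, 1.0)]]}
vals, pol = modified_policy_iteration(3, acts, target={2})
print(vals, pol)
```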
2
Probabilistic Timed Automata with Clock-Dependent Probabilities
Probabilistic timed automata are classical timed automata extended with discrete probability distributions over edges. We introduce clock-dependent probabilistic timed automata, a variant of probabilistic timed automata in which transition probabilities can depend linearly on clock values. Clock-dependent probabilistic timed automata allow the modelling of a continuous relationship between time passage and the likelihood of system events. We show that the problem of deciding whether the maximum probability of reaching a certain location is above a threshold is undecidable for clock-dependent probabilistic timed automata. On the positive side, we show that the maximum and minimum probability of reaching a certain location in clock-dependent probabilistic timed automata can be approximated using a region-graph-based approach.
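As an illustration of the clock-dependent idea (the edge, the clock bound and the probabilities below are assumptions, not taken from the paper), an edge's branch weights can be linear expressions over the current clock value, so the same edge becomes more or less likely to lead to a failure location depending on when it is taken.

```python
# Illustrative sketch: an edge taken at clock value x <= 10 branches to
# "fail" with probability x/10 and to "goal" otherwise.

import random

def take_edge(x, deadline=10.0):
    """Resolve the probabilistic choice on an edge whose branch weights
    depend linearly on the clock value x (0 <= x <= deadline)."""
    p_fail = x / deadline          # linear clock-dependent probability
    return "fail" if random.random() < p_fail else "goal"

# The later the edge is taken, the less likely the run reaches "goal".
for x in (1.0, 5.0, 9.0):
    runs = [take_edge(x) for _ in range(10_000)]
    print(x, runs.count("goal") / len(runs))
```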
3
Probabilistic models play an important role in many fields, such as distributed systems and simulation. Like non-probabilistic systems, they can be synthesized using classical refinement-based techniques, but doing so also requires identifying the probability distributions to be used and their parameters. Since fully automated, blind refinement is generally undecidable, many works have tried to synthesize such models by searching for the parameters of the distributions. Syntax-guided synthesis approaches are more powerful: they try to synthesize models structurally by using context-free grammars. However, several problems arise, such as a huge search space, the complexity of the generated models, and the limited ability of context-free grammars to express constraints over the structure. In this paper, we propose a multi-step refinement approach, based on meta-models, that offers several abstraction levels to reduce the size of the search space. More specifically, each refinement step is divided into two stages: in the first, the desired shape of the models is described by context-sensitive constraints; in the second, model templates are instantiated using global optimization techniques. We apply our approach to synthesize a set of optimal probabilistic models and show that context-sensitive constraints, coupled with the multi-level abilities of the approach, make the synthesis task more effective.
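A hedged sketch of the template-instantiation stage only (the parameter name, the 2-state template and the objective are illustrative, not the paper's meta-models): a probabilistic model template with a free parameter is instantiated by a global search for the parameter value that best meets a quantitative requirement.

```python
# Illustrative template instantiation: a 2-state chain moves start -> goal
# with probability p per step; find p so the 5-step reachability probability
# is close to a target while keeping p (treated as a "cost") low.

def prob_reach_within(p, steps=5):
    """Probability of reaching the goal within `steps` steps."""
    return 1.0 - (1.0 - p) ** steps

def instantiate_template(target=0.9, cost_weight=0.2):
    # Simple global search over [0, 1]; a real tool would use a proper global
    # optimizer and a model checker to evaluate the objective.
    best_p, best_score = None, float("inf")
    for i in range(1001):
        p = i / 1000.0
        score = abs(prob_reach_within(p) - target) + cost_weight * p
        if score < best_score:
            best_p, best_score = p, score
    return best_p

print(instantiate_template())
```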
4
Extrapolation of an Optimal Policy using Statistical Probabilistic Model Checking
We present different ways of approximately extrapolating the optimal policy of a small model to that of a large equivalent model, which itself is too large for its exact policy to be found directly using probabilistic model checking (PMC). In particular, we obtain a globally optimal resolution of non-determinism in several small Markov decision processes (MDPs) or extensions thereof, such as stochastic multi-player games (SMGs), using PMC. We then use that resolution to form a hypothesis about an analytic decision boundary representing a corresponding policy in an equivalent large MDP/SMG. The resulting hypothetical decision boundary is then verified statistically and approximately, to check whether it is locally optimal and whether it indeed represents a "good enough" policy. The verification either weakens or strengthens the hypothesis. The optimality criterion for the policy can be expressed in any modal logic that includes a version of the probabilistic operator P~p[·] and for which a PMC method exists.
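The following toy sketch illustrates the extrapolation step under strong simplifying assumptions (a one-dimensional state feature, two actions and a threshold-shaped boundary; none of this comes from the paper): optimal actions read off exactly solved small models are turned into a hypothesised decision boundary, which would then be checked statistically on the large model.

```python
# Illustrative extrapolation of a policy as a decision boundary.

def fit_threshold(samples):
    """samples: (feature, optimal_action) pairs obtained from small models
    solved exactly. Returns a threshold t: take action 1 when feature >= t."""
    ones = [f for f, a in samples if a == 1]
    zeros = [f for f, a in samples if a == 0]
    return (max(zeros) + min(ones)) / 2.0   # hypothesised decision boundary

def boundary_policy(t):
    return lambda feature: 1 if feature >= t else 0

# Optimal actions read off small models (illustrative numbers).
small_model_samples = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1), (6.0, 1)]
policy = boundary_policy(fit_threshold(small_model_samples))
print(policy(1.5), policy(5.0))   # this policy would be checked statistically on the large model
```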
5
We describe a GPGPU-based Monte Carlo simulator integrated with Prism. It supports Markov chains with discrete or continuous time and a subset of properties expressible in PCTL, CSL and their variants extended with rewards. The simulator allows automated statistical verification of results obtained using Prism's formal methods.
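For context, a minimal statistical model checking loop of the kind such a simulator parallelises (the chain, the horizon and the confidence level are illustrative assumptions, not the tool's implementation): sample many paths, count those satisfying the property, and report an estimate with a confidence interval.

```python
# Illustrative statistical estimation of a bounded reachability property on a
# toy 3-state discrete-time Markov chain.

import math
import random

def simulate_run(horizon=20):
    """One sampled path; returns True if the goal state 2 is reached within
    `horizon` steps."""
    trans = {0: [(0.6, 0), (0.3, 1), (0.1, 2)],
             1: [(0.5, 0), (0.5, 2)],
             2: [(1.0, 2)]}
    s = 0
    for _ in range(horizon):
        if s == 2:
            return True
        r, acc = random.random(), 0.0
        for p, t in trans[s]:
            acc += p
            if r < acc:
                s = t
                break
    return s == 2

n = 100_000
hits = sum(simulate_run() for _ in range(n))
p_hat = hits / n
half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)   # 95% normal-approximation interval
print(f"P(reach goal within 20 steps) ~= {p_hat:.4f} +/- {half:.4f}")
```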