The purpose of this article is to consider a special class of combinatorial problems, the so-called Prouhet–Tarry–Escott problem, whose solution is realized by constructing finite sequences of ±1. For example, for a fixed p∈N, it is well known that there exists np∈N with the property that any set of np consecutive integers can be divided into two sets with equal sums of their p-th powers. This property remains valid also for finite arithmetic progressions of complex numbers.
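As an illustration of this property (a sketch added here, not taken from the article itself): Prouhet's classical construction splits the integers 0, ..., 2^(k+1) − 1 according to the parity of their binary digit sum (the ±1 Thue–Morse sign sequence) into two sets whose sums of p-th powers agree for every p ≤ k.

```python
def thue_morse_split(k):
    """Split 0 .. 2**(k+1) - 1 by the parity of the binary digit sum
    (the Thue-Morse sign sequence)."""
    evil, odious = [], []
    for m in range(2 ** (k + 1)):
        (evil if bin(m).count("1") % 2 == 0 else odious).append(m)
    return evil, odious

evil, odious = thue_morse_split(3)     # the integers 0..15
for p in range(1, 4):                  # equal sums of p-th powers for p <= 3
    assert sum(m ** p for m in evil) == sum(m ** p for m in odious)
```

For k = 3 this partitions 0..15 into two eight-element sets agreeing on sums, sums of squares, and sums of cubes.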
2
We consider the classical One-Dimensional Bin Packing Problem (1D-BPP), an NP-hard optimization problem in which a set of weighted items has to be packed into one or more identical capacitated bins. We give an experimental study of symmetry-breaking constraints used to strengthen the classical integer linear programming formulation proposed to solve this problem optimally. Our computational experiments are conducted on the data sets found in BPPLib, and the results confirm the theoretical findings.
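To make the symmetry-breaking idea concrete, here is a small hedged sketch (a plain exhaustive search, not the paper's ILP model): since the bins are identical, permuting them yields equivalent packings, so the search is only allowed to open one canonical "next" bin instead of choosing among several empty ones.

```python
def min_bins(weights, capacity):
    """Branch-and-bound for 1D bin packing with the classic
    symmetry-breaking rule: a new bin may only be opened as the
    next bin in order, never picked among interchangeable empties."""
    best = [len(weights)]          # trivial upper bound: one bin per item
    loads = []                     # current load of each open bin

    def place(i):
        if len(loads) >= best[0]:  # cannot improve on the best found
            return
        if i == len(weights):
            best[0] = len(loads)
            return
        for j in range(len(loads)):           # try every open bin
            if loads[j] + weights[i] <= capacity:
                loads[j] += weights[i]
                place(i + 1)
                loads[j] -= weights[i]
        loads.append(weights[i])              # open one new bin (symmetry-broken)
        place(i + 1)
        loads.pop()

    place(0)
    return best[0]

print(min_bins([4, 8, 1, 4, 2, 1], 10))  # → 2
```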
3
We generalize the well-known correspondence between partitions and equivalence relations on a set to the case of graphs and hypergraphs. This is motivated by the role that partitions and equivalence relations play in Rough Set Theory and the results provide some of the foundations needed to develop a theory of rough graphs. We use one notion of a partition of a hypergraph, which we call a graphical partition, and we show how these structures correspond to relations on a hypergraph having additional properties. In the case of a hypergraph with only nodes and no edges these properties are exactly the usual reflexivity, symmetry and transitivity properties required for equivalence relations on a set. We present definitions for upper and lower approximations of a subgraph with respect to a graphical partition. These generalize the well-known approximations in Rough Set Theory. We establish fundamental properties of our generalized approximations and provide examples of these constructions on some graphs.
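For reference, the classical set-based approximations that the paper generalizes can be sketched as follows (a standard textbook construction, not code from the paper):

```python
def rough_approximations(partition, subset):
    """Classical rough-set approximations of `subset` w.r.t. a partition:
    the lower approximation unions the blocks contained in the subset,
    the upper approximation unions the blocks that meet it."""
    lower, upper = set(), set()
    for block in partition:
        if block <= subset:       # block entirely inside the subset
            lower |= block
        if block & subset:        # block intersects the subset
            upper |= block
    return lower, upper

blocks = [{1, 2}, {3, 4}, {5}]
lower, upper = rough_approximations(blocks, {1, 2, 3})
# lower = {1, 2}, upper = {1, 2, 3, 4}
```

The subgraph approximations defined in the paper reduce to this construction when the hypergraph has nodes but no edges.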
4
Many problems in natural language processing can be interpreted using structures from discrete mathematics. In this paper we discuss the search-query and topic-finding problem using a generic context-based approach. This problem can be described as a Minimum Set Cover Problem with several constraints. The goal is to find a minimum covering of documents with the given context for a fixed weight function. The aim of this reformulation is a deeper understanding of both the hierarchical problem, using union and intersection, and the non-hierarchical problem, using union alone. We therefore choose a modeling based on bipartite graphs and suggest a novel reformulation as an integer linear program as well as novel graph-theoretic approaches.
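The Minimum Set Cover Problem underlying the reformulation can be illustrated by the standard greedy approximation (an illustration only, not the paper's ILP or graph-theoretic approach); contexts and documents below are hypothetical:

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: repeatedly pick the subset covering the most
    still-uncovered elements (classic ln(n)-factor approximation)."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("universe cannot be covered")
        cover.append(best)
        uncovered -= best
    return cover

documents = {1, 2, 3, 4, 5}
contexts = [{1, 2, 3}, {2, 4}, {4, 5}, {3, 5}]
print(greedy_set_cover(documents, contexts))  # → [{1, 2, 3}, {4, 5}]
```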
5
We present arguments showing that the standard notion of the support of a Borel probability measure is not well defined in every topological space. Our goal is to construct a "very inseparable" space and to show the existence of a family of closed sets such that each of them is of full measure but their intersection is empty. The classic construction presented is credited to Jean Dieudonné and dates back to 1939. We also propose certain simplifications which are, to the best of our knowledge, new.
6
In recent decades, several fragments of set theory have been studied in the context of Computable Set Theory. In general, the semantics of set-theoretic languages differs from the canonical first-order semantics in that the interpretation domain of set-theoretic terms is fixed to a given universe of sets. Because of this, theoretical results and various machinery developed in the context of first-order logic may not be easily applicable in the set-theoretic realm. Recently, the decidability of quantified fragments of set theory which allow one to explicitly handle ordered pairs has been studied, in view of applications in the field of knowledge representation. Among other results, an NEXPTIME decision procedure for the satisfiability of formulae in one of these fragments, ∀0π, has been devised. In this paper we exploit the main features of this decision procedure to reduce the satisfiability problem for the fragment ∀0π to the problem of Herbrand satisfiability for a first-order language extending it. In addition, it turns out that this reduction maps formulae of the Disjunctive Datalog subset of ∀0π into Disjunctive Datalog formulae.
7
This paper aims at a framework for developing, studying and comparing observational equivalences in the setting of a real-time true concurrent model. In particular, we introduce a family of trace and bisimulation equivalences in interleaving, step, partial order and causal net semantics in the setting of time Petri nets (elementary net systems whose transitions are labeled with time firing intervals, can fire only once their lower time bounds are attained, and are forced to fire when their upper time bounds are reached) [3]. We examine the relationships between the equivalences, showing the discriminating power of the approaches of the linear-time vs. branching-time and interleaving vs. partial order spectra, and construct a hierarchy of equivalence classes of time Petri nets. This allows studying in complete detail the timing behaviour in addition to the degrees of relative concurrency and nondeterminism of processes.
8
Reaction systems are a qualitative formalism for modeling systems of biochemical reactions, characterized by the non-permanency of elements: molecules disappear if not produced by any enabled reaction. Reaction systems execute in an environment that provides new molecules at each step. Brijder, Ehrenfeucht and Rozenberg introduced the idea of predictors. A predictor of a molecule s, for a given n, is the set of molecules to be observed in the environment in order to determine whether s is produced or not by the system at step n. We previously introduced the notion of a formula-based predictor, that is, a propositional logic formula that precisely characterizes the environments leading to the production of s after n steps. In this paper we revise the notion of formula-based predictor by defining a specialized version that assumes the environment to provide molecules according to what is expressed by a temporal logic formula. As an application, we use specialized formula-based predictors to give theoretical grounds to previously obtained results on a model of gene regulation.
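The basic execution step of a reaction system, as standardly defined, can be sketched as follows (the molecule names are hypothetical):

```python
def rs_step(state, reactions):
    """One step of a reaction system: a reaction (reactants, inhibitors,
    products) is enabled when all its reactants are present and none of
    its inhibitors is; the next state contains exactly the products of
    the enabled reactions (non-permanency of elements)."""
    result = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            result |= products
    return result

# Hypothetical reactions; in a full execution, the molecules supplied by
# the environment would be added to the state before each step.
reactions = [({"a"}, {"b"}, {"c"}), ({"c"}, set(), {"a"})]
print(rs_step({"a"}, reactions))  # → {'c'}
```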
9
Rough sets offer a well-known approach to incomplete or imprecise data. In this paper I briefly report how this framework was successfully encoded by means of one of the world's leading computer proof assistants. The general approach is essentially based on binary relations, and all natural properties of approximation operators can be obtained via adjectives added to the underlying relations. I focus on lattice-theoretical aspects of rough sets, both to enable the application of external theorem provers like EQP or Prover9 and to translate the results into the TPTP format widely recognized in the world of automated proof search. As a starting point I wanted a clearly written and reasonably formal (although, as is usual, informal) paper authored by a specialist from a discipline other than lattice theory. Lattice theory for rough sets by Jouni Järvinen (LTRS for short) turned out to be quite a reasonable choice as a testbed for the present formalisation of both lattices and rough sets. The popular computerised proof assistant Mizar was used as a tool, hence all the efforts are available in one of the largest repositories of computer-checked mathematical knowledge, the Mizar Mathematical Library.
10
This paper presents an FPGA- and softcore-CPU-based device for computing the core of large datasets using rough set methods. The presented architectures have been tested on two real datasets by downloading and running the presented solutions inside an FPGA. The tested datasets had from 1 000 to 10 000 000 objects. The same operations were performed in a software implementation. The obtained results show a large reduction in computation time when hardware supports core generation, compared to a pure software implementation.
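As a software reference point (a sketch of one standard discernibility-based criterion for consistent decision tables, not the paper's hardware design): an attribute belongs to the core iff some pair of objects with different decisions is discerned by that attribute alone.

```python
def core_attributes(table, decision):
    """Rough-set core of a consistent decision table: attribute a is in
    the core iff some pair of objects with different decision values
    differs only in attribute a."""
    core = set()
    n = len(table)
    for i in range(n):
        for j in range(i + 1, n):
            if decision[i] == decision[j]:
                continue
            diff = [a for a in range(len(table[i])) if table[i][a] != table[j][a]]
            if len(diff) == 1:           # discerned by a single attribute
                core.add(diff[0])
    return core

rows = [(0, 1), (0, 0), (1, 1)]   # hypothetical condition attributes
dec = [0, 1, 1]                   # hypothetical decision attribute
print(core_attributes(rows, dec))  # → {0, 1}
```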
11
In this paper, we overview three closely related problems: the Nelson–Hadwiger problem on coloring spaces with forbidden monochromatic distances; Borsuk's problem on partitioning sets in spaces into parts of smaller diameter; and the problem of finding codes with forbidden Hamming distances.
12
In previous work we have presented a class of algebras enhanced with two contact relations representing rough set style approximations of a spatial contact relation. In this paper, we develop a class of relational systems which is mutually interpretable with that class of algebras, and we consider a relational logic whose semantics is determined by those relational systems. For this relational logic we construct a proof system in the spirit of Rasiowa-Sikorski, and we outline the proofs of its soundness and completeness.
13
The calculation of similarity measures for exact text matching is a critical task in the area of pattern matching that needs great attention. Many similarity measures exist in the literature, but no single best method exists for measuring the closeness of two strings. The objective of this paper is to explore the grammatical properties and features of the generalized n-gram matching technique for finding exact text in computer applications. Three new similarity measures are proposed to improve the performance of the generalized n-gram method. The new methods assign high similarity values and good performance at low running times. Experiments with the new methods demonstrate that they are universal and very useful for words that can be derived from a word list as a group and for retrieving relevant medical terms from a database. One of the methods achieved the best correlation of values in the evaluation of a subjective examination.
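A baseline generalized n-gram similarity (a common textbook variant, not one of the paper's three new measures) averages the overlap of n-gram sets over several values of n:

```python
def ngram_similarity(s, t, max_n=3):
    """Generalized n-gram similarity: average, over n = 1..max_n, of the
    fraction of shared n-grams between the two strings."""
    def grams(x, n):
        return {x[i:i + n] for i in range(len(x) - n + 1)}
    scores = []
    for n in range(1, max_n + 1):
        a, b = grams(s, n), grams(t, n)
        if a or b:
            scores.append(len(a & b) / max(len(a), len(b)))
    return sum(scores) / len(scores) if scores else 1.0

print(round(ngram_similarity("night", "nacht"), 3))  # → 0.283
```

Unigrams of "night" and "nacht" share {n, h, t} (score 0.6), bigrams share only "ht" (0.25), and no trigram is shared (0.0), giving the average above.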
14
In the field of small universal Turing machines, Yurii Rogozhin holds a special prize: he was the first to close off an infinite number of open questions by drawing a closed curve that separates the infinite set of Turing machines that are universal from a finite set of small machines for which the question remains open. Rogozhin did this by finding the smallest universal Turing machines known at the time, both in terms of the number of states and the number of symbols. This brief note summarises this and a few of Yurii's other contributions to the field, including his work with Manfred Kudlek on small circular Post machines.
15
Starting from a theorem by Hall, we define the identity transform of a permutation π as C(π) = (0 + π(0), 1 + π(1), ..., (n − 1) + π(n − 1)), and we define the set Cn = {C(π) : π ∈ Sn}, where Sn is the set of permutations of the elements of the cyclic group Zn. In the first part of this paper we study the set Cn: we show some closure properties of this set, and then provide some of its combinatorial and algebraic characterizations and connections with other combinatorial structures. In the second part of the paper, we use some of the combinatorial properties we have determined to provide a different algorithm for the proof of Hall's theorem.
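The identity transform is straightforward to compute (a sketch; since the entries live in the cyclic group Zn, the sums are taken modulo n):

```python
from itertools import permutations

def identity_transform(pi):
    """C(pi) = (0 + pi(0), 1 + pi(1), ..., (n-1) + pi(n-1)) in Z_n."""
    n = len(pi)
    return tuple((i + pi[i]) % n for i in range(n))

# C_3: the identity transforms of all permutations of Z_3
C3 = {identity_transform(p) for p in permutations(range(3))}
print(sorted(C3))
# → [(0, 0, 0), (0, 2, 1), (1, 0, 2), (1, 1, 1), (2, 1, 0), (2, 2, 2)]
```

For n = 3 the transform happens to be injective, so |C3| = |S3| = 6; this fails already for larger n, which is part of what makes the structure of Cn interesting.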
16
We investigate Mark Titchener's T-complexity, an algorithm which measures the information content of finite strings. After introducing the T-complexity algorithm, we turn our attention to a particular class of "simple" finite strings. By exploiting special properties of simple strings, we obtain a fast algorithm to compute the maximum T-complexity among strings of a given length, and our estimates of these maxima show that T-complexity differs asymptotically from Kolmogorov complexity. Finally, we examine how closely de Bruijn sequences resemble strings with high T-complexity.
17
We consider the first-order theory of the monoid P(A*) of languages over a finite or infinite alphabet A (with at least two letters) endowed solely with concatenation lifted to sets: no set-theoretical predicate or function, no constant. Coding a word u by the submonoid u* it generates, we prove that the operation (u*, v*) → (uv)* and the predicate {(u*, X) | ε ∈ X, u ∈ X} are definable in ⟨P(A*); •, =⟩. This allows us to interpret the second-order theory of ⟨A*; •, =⟩ in the first-order theory of ⟨P(A*); •, =⟩ and to prove the undecidability of the Π8 fragment of the latter theory. These results involve technical difficulties witnessed by the logical complexity of the obtained definitions: the above-mentioned predicates are respectively Δ5 and Δ7.
18
Jan Śleszyński, a great mathematician, is considered a pioneer of Polish logic; however, he was not connected with the famous Warsaw School of Logic (WSL). He believed that his mission was the critical evaluation of the work of other logicians in the field of the foundations of mathematics and proof theory. Among his writings we find several notes regarding the work of Stanisław Leśniewski (a co-founder of the WSL) and his collective set theory. These remarks are the subject of investigation of the present paper.
19
This paper describes the philosophy of logic and mathematics in Poland in the years 1918–1939. Special attention is given to the views developed in the Polish Mathematical School and the Warsaw School of Logic. The paper indicates various differences between the mathematical circles in Warsaw, Lvov and Kraków.
Natural language contains many imprecise terms that attempt to reflect the surrounding reality. Mathematicians have long pointed out that probability theory alone cannot capture the whole range of uncertainty or, more precisely, of imperfect information. Commonly used terms and phrases in natural language, such as tall men, about 5, very cold, are standard imprecise (fuzzy) concepts. The boundary between cold and warm, or between tall and short people, is not sharp, and particular values of temperature or height satisfy a given property to varying degrees. Such gradation between full membership, partial membership and non-membership cannot be expressed in classical set theory. The theory of fuzzy sets overcomes these limitations of traditional set theory. Fuzzy categories are often met in geological and mineral-deposit issues. Using terms such as large/small deposit, rich/lean ore, large/small deposit thickness, high/low sulfur content, good/bad flotation properties and many others is a natural feature of language as a tool of expression and mutual communication. Any deposit feature expressed quantitatively (a deposit parameter) can be verbalized in this way. Is a natural gas field large when it holds at least 100 million cubic meters? Or is 99 million cubic meters the limit? Or 98 million, or perhaps an entirely different resource volume? Any attempt to fix such a boundary seems amusing, and it will clearly be troublesome, since the transformation of qualitative concepts into quantitative ones is not always direct and easy. The paper introduces the basic terminology and general principles of fuzzy modeling. Using the example of a copper ore deposit, tools for the imprecise description of concepts and parameters used in deposit geology are indicated and demonstrated.
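The gas-field question above is exactly what a fuzzy membership function resolves: instead of a sharp cutoff at 100 million cubic meters, membership in "large field" grows gradually. A minimal sketch (the thresholds below are purely illustrative, not values from the paper):

```python
def large_gas_field(volume_mln_m3, lo=80.0, hi=120.0):
    """Hypothetical membership degree of 'large gas field': 0 below lo,
    1 above hi, linear in between (a simple ramp-shaped fuzzy set)."""
    if volume_mln_m3 <= lo:
        return 0.0
    if volume_mln_m3 >= hi:
        return 1.0
    return (volume_mln_m3 - lo) / (hi - lo)

for v in (80, 98, 99, 100, 120):
    print(v, large_gas_field(v))  # gradual membership instead of a cutoff
```

A field of 98 million m3 is then "large to degree 0.45" and one of 100 million m3 "large to degree 0.5", so nearby volumes receive nearby degrees rather than falling on opposite sides of a boundary.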