Results found: 9

Search results
1
Universal Query Language for Unified State Model
EN
Unified State Model (USM) is a single data model that can convey objects of major programming languages and databases. USM exploits and emphasizes the common properties of their data models and is equipped with mappings from those data models onto it. With USM at hand, a natural research question arises: can the numerous query languages for the data subsumed by USM be cleanly mapped onto a common language? We have designed such a language, called the Unified Query Language (UQL). UQL is intended to be a minimalistic and elegant query language that can express queries of the languages whose data models are covered by USM. In this paper we define UQL and its concise set of operators. We then give a gentle introduction to UQL features by showing examples of SQL and ODMG OQL queries and their mapping onto UQL. We conclude by presenting the mapping onto UQL of the theoretical foundations of these two major query languages: the multiset relational algebra and the object query algebra. This is an important step towards establishing a fully-fledged common query language for USM and its subsumed data models.
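The multiset relational algebra mentioned above can be illustrated with a minimal sketch (the relation and attribute values are hypothetical, and this is not UQL syntax itself): under bag semantics, projection accumulates duplicates instead of collapsing them as set semantics would.

```python
from collections import Counter

# A relation under multiset (bag) semantics: tuples with multiplicities.
emp = Counter({
    ("Ann", "IT"): 1,
    ("Bob", "IT"): 1,
    ("Eve", "HR"): 1,
})

def select(rel, pred):
    """Bag selection: keep tuples satisfying pred, multiplicities preserved."""
    return Counter({t: n for t, n in rel.items() if pred(t)})

def project(rel, indices):
    """Bag projection: duplicates accumulate instead of collapsing."""
    out = Counter()
    for t, n in rel.items():
        out[tuple(t[i] for i in indices)] += n
    return out

# SELECT dept FROM emp  -- bag semantics keeps the duplicate 'IT'
depts = project(emp, [1])
print(depts)  # Counter({('IT',): 2, ('HR',): 1})
```

This duplicate-preserving behavior is exactly what distinguishes SQL's multiset semantics from classical set-based relational algebra.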
2
Query Rewriting Based on Meta-Granular Aggregation
EN
Analytic database queries are exceptionally time-consuming. Decision support systems employ various execution techniques to accelerate such queries and reduce their resource consumption; probably the most important of them is the materialization of partial results. However, any introduction of derived objects into the database schema increases the cost of software development, since programmers must take care of their usage and synchronization. In this article we consider using partial aggregations materialized in additional tables. The idea is based on the concept of metagranules, which represent information about the grouping and the aggregations used. Metagranules have a natural partial order that guides the optimization process. We present solutions to two problems. First, we assume that a set of stored metagranules is given and we optimize a query: we present a novel query rewriting method that makes analytic queries use the information stored in metagranules, describe our proof-of-concept implementation of this method, and perform an extensive experimental evaluation on databases of up to 0.5 TiB and 6 billion rows. Second, we assume that a database workload is given and we want to select the optimal set of metagranules to materialize. Although each metagranule accelerates some queries, it also imposes a significant overhead on updates; we therefore propose a cost model that includes both the benefits for queries and the penalties for updates. We run an exhaustive search over the space of metagranule sets to find the optimum. Finally, we empirically verify the identified optimal sets against database instances of up to 0.5 TiB, with billions of rows and hundreds of millions of aggregated rows.
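The core rewriting idea can be sketched in a few lines (all names and data are illustrative; the actual metagranule machinery is far richer): a SUM pre-aggregated at a finer grouping can answer a query at a coarser grouping, because grouping sets form a partial order under set inclusion.

```python
from collections import defaultdict

# Stored metagranule: SUM(amount) pre-aggregated by (region, product).
granule = {
    ("north", "cpu"): 70, ("north", "gpu"): 30,
    ("south", "cpu"): 50, ("south", "gpu"): 10,
}
granule_keys = ("region", "product")

def can_answer(query_keys):
    """The granule answers a query iff the query groups by a subset of its keys."""
    return set(query_keys) <= set(granule_keys)

def rewrite(query_keys):
    """Re-aggregate the small granule instead of scanning the base data."""
    pos = [granule_keys.index(k) for k in query_keys]
    out = defaultdict(int)
    for key, total in granule.items():
        out[tuple(key[i] for i in pos)] += total
    return dict(out)

# GROUP BY region is answerable from the (region, product) granule.
print(rewrite(("region",)))  # {('north',): 100, ('south',): 60}
```

The subset test is the partial order mentioned in the abstract: a granule sits above every grouping it can serve.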
3
A Bi-objective Optimization Framework for Heterogeneous CPU/GPU Query Plans
EN
Graphics Processing Units (GPUs) have significantly more applications than just rendering images. They are also used in general-purpose computing to solve problems that can benefit from massively parallel processing. However, some tasks are poorly suited to GPUs or fit them only partially; the latter class is the focus of this paper. We elaborate on hybrid CPU/GPU computation and build optimization methods that seek an equilibrium between the two computation platforms. The method is based on a heuristic search for bi-objective Pareto-optimal execution plans in the presence of multiple concurrent queries. The underlying model mimics a commodity market in which devices are producers and queries are consumers; the value of the computing devices' resources is governed by the laws of supply and demand. Our model of the optimization criteria allows finding solutions to problems not yet addressed in heterogeneous query processing. Furthermore, it also offers lower time complexity and higher accuracy than other methods.
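Bi-objective Pareto optimality of execution plans can be illustrated with a minimal sketch (plan names and cost pairs are hypothetical; the paper's heuristic search and market model are not reproduced here): a plan belongs to the Pareto front iff no other plan is at least as good in both objectives and strictly better in one.

```python
# Candidate execution plans as (cpu_time, gpu_time) cost pairs -- illustrative.
plans = {
    "all_cpu":  (10.0, 0.0),
    "all_gpu":  (0.0, 8.0),
    "hybrid_a": (4.0, 3.0),
    "hybrid_b": (5.0, 5.0),   # dominated by hybrid_a
}

def dominates(a, b):
    """a dominates b: no worse in every objective, and not identical."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(candidates):
    """Keep every plan not dominated by any other candidate."""
    return {
        name: cost for name, cost in candidates.items()
        if not any(dominates(other, cost) for other in candidates.values())
    }

print(sorted(pareto_front(plans)))  # ['all_cpu', 'all_gpu', 'hybrid_a']
```

The optimizer's job is then to pick one point from this front according to the current supply-and-demand prices of the devices.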
4
On Visual Assessment of Software Quality
EN
Developing and maintaining understandable, modifiable software is very challenging. Good system design and implementation require strict discipline, the architecture of a project can be exceptionally difficult for developers to grasp, and a project's documentation gets outdated in a matter of days. These problems can be addressed with software analysis and visualization tools. Incorporating such tools into the continuous integration process provides a constantly up-to-date view of the project as a whole and helps keep track of what is going on in it. In this article we describe an innovative, graph-based method of software analysis and visualization. The benefits of this approach are shown through an experimental evaluation of visual assessment of software quality using a proof-of-concept implementation, the Magnify tool.
5
One Graph to Rule Them All: Software Measurement and Management
EN
Software architecture is typically defined as the fundamental organization of a system, embodied in its components, their relationships to one another and to the system's environment, and the principles governing the system's design and evolution. To manage the architecture of a large software system, the architect needs a holistic model that supports continuous integration and verification of all system artifacts. In earlier papers we proposed a unified graph-based approach to managing knowledge about the architecture of a software system. In this paper we demonstrate that this approach facilitates convenient and efficient project measurement. First, we show how existing software metrics can be translated into our model in a way that is independent of the programming language. Second, we introduce new metrics that cross programming language boundaries and are easy to implement with our approach. We conclude by demonstrating how the new model can be implemented using existing tools: in particular, graph databases are a convenient implementation of an architectural repository, and graph query languages and graph algorithms are an effective way to define metrics and specialized graph views.
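Language-independent metrics over a dependency graph can be sketched as follows (the module names are hypothetical, and a graph database query would compute the same values over a real architectural repository). Here the graph is a plain adjacency map and the metrics are classical coupling measures:

```python
# Language-agnostic dependency graph: node -> set of nodes it depends on.
deps = {
    "ui":      {"core", "util"},
    "service": {"core", "util"},
    "core":    {"util"},
    "util":    set(),
}

def fan_out(node):
    """Efferent coupling: how many modules this node depends on."""
    return len(deps[node])

def fan_in(node):
    """Afferent coupling: how many modules depend on this node."""
    return sum(1 for srcs in deps.values() if node in srcs)

def instability(node):
    """Instability I = Ce / (Ca + Ce); 0 = maximally stable, 1 = unstable."""
    ce, ca = fan_out(node), fan_in(node)
    return ce / (ca + ce) if ca + ce else 0.0

print(instability("util"))  # 0.0: everything depends on util, util on nothing
print(instability("ui"))    # 1.0: ui depends on others, nothing depends on it
```

Because the graph abstracts away the source language, the same computation applies whether the nodes came from Java packages, Python modules, or a mix of both.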
6
Update Propagator for Joint Scalable Storage
EN
In recent years the scalability of web applications has become critical. Web sites are getting more dynamic and customized, which increases server workload, and future increases in load are difficult to predict. The industry therefore seeks solutions that scale well. With current technology, almost every element of a system architecture can be replicated when necessary; databases, however, are problematic in this respect. The traditional approach with a single relational database has become insufficient, so to achieve scalability architects add a number of different kinds of storage facilities. This can be error-prone because of inconsistencies in the stored data. In this paper we present a novel method of assembling systems with multiple storages: an algorithm for update propagation among different storages such as multi-column, key-value, and relational databases.
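The propagation problem can be sketched minimally (the store layouts and field names are hypothetical; the paper's algorithm additionally handles ordering, failures, and consistency): one logical update must reach every storage representation of the same entity.

```python
# One logical entity kept in three storage shapes -- illustrative only.
relational = {}                          # primary key -> row dict
key_value = {}                           # "user:<id>:<field>" -> value
columnar = {"name": {}, "email": {}}     # column -> {id: value}

def propagate_update(user_id, field, value):
    """Apply one logical update to every storage representation."""
    relational.setdefault(user_id, {})[field] = value
    key_value[f"user:{user_id}:{field}"] = value
    columnar.setdefault(field, {})[user_id] = value

propagate_update(1, "name", "Ann")
propagate_update(1, "email", "ann@example.com")
print(relational[1])             # {'name': 'Ann', 'email': 'ann@example.com'}
print(key_value["user:1:name"])  # Ann
```

Skipping any one of the three writes is exactly the kind of inconsistency the abstract warns about; centralizing them in a propagator keeps the stores in step.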
7
The Impedance Mismatch in Light of the Unified State Model
EN
In this paper we discuss the misunderstandings that have arisen over the years around the broadly defined term object-relational impedance mismatch, which occurs in various aspects of database application programming. Three concerns are judged the most important: mismatching data models, mismatching binding times, and mismatching object lifecycles. This paper focuses on the data model mismatch. We introduce the common state theory, i.e. a unified model of objects in popular programming languages and databases. The proposed model exploits and emphasizes the common properties of all these objects. Using our model we demonstrate that there are notably more similarities than differences, and we conclude that the impact of the data model mismatch can be significantly reduced.
8
Types and Type Checking in Stack-Based Query Languages
EN
In this report we propose a new approach to types and static type checking in object-oriented database query and programming languages. In contrast to typical approaches to types, which involve very advanced mathematical concepts, we present a type system from the practitioner's point of view. We argue that many features of current object-oriented query/programming languages, such as ellipses, automatic coercions and irregularities in data structures, make very formal type systems irrelevant to practical situations. We treat types as syntactic qualifiers (tokens or structures of tokens) attached to objects, procedures, modules and other data/program entities; we call such syntactic qualifiers signatures. We avoid the simplistic notion that a type has some internal semantics, e.g. as a set of values. In our approach, the type inference system is based on predefined decision tables that take signatures and produce type-checking decisions, which can be: (1) a type error, (2) a new signature, or (3) a dereference, a coercion and/or delegation of the type check to run time. A type inference decision table is to be developed for every query/programming operator. Type inference follows from the stack-based approach (SBA) to object-oriented query/programming languages: static type checking is simply a compile-time simulation of the run-time computation. The type checker is therefore based on data structures that statically model run-time structures and processes, namely: (1) a metabase (an internal representation of the database schema, the counterpart of an object store), (2) a static environment stack (the counterpart of the run-time environment stack), (3) a static result stack (the counterpart of the run-time result stack), and (4) type inference decision tables (the counterpart of run-time computations). We then present the static type-checking procedure, which is driven by the metabase, the static stacks and the type inference decision tables.
To discover several type errors in one run, we show how to correct some type errors during type checking. Finally, we present our prototype implementation, showing that our approach is feasible and efficient with moderate implementation effort.
PL
In this report we propose a new approach to types and static type checking in object-oriented query/programming languages. In contrast to approaches that rely on advanced mathematical concepts, we present the practitioners' position. Many features of current query/programming languages, such as ellipses, automatic coercions and irregularities in data structures, mean that very formal type systems do not correspond to practice. We propose types as syntactic qualifiers (tokens or structures of tokens) attached to objects, procedures and other programming entities; we call such qualifiers signatures. We avoid the popular viewpoint in which a type has internal semantics, e.g. in the form of a set of values. The type inference system is based on decision tables that operate on signatures and generate type-checking decisions, which can be: (1) a type error, (2) a new signature, (3) a dereference, a coercion and/or delegation of the type check to run time. Decision tables should be prepared for every operator appearing in queries/programs. Type inference follows from the stack-based approach (SBA) to object-oriented query/programming languages. Static type checking simulates at compile time the situation that will arise at run time. Hence the type checker is based on data structures that statically model run-time structures and processes, i.e.: (1) a metabase (an internal representation of the schema, the counterpart of the object store), (2) a static environment stack (the counterpart of the environment stack), (3) a static result stack (the counterpart of the result stack), and (4) type inference decision tables (the counterpart of operators). We then present the static type-checking procedure, whose operation is based on the metabase, the static stacks and the decision tables.
To detect many type errors in one pass, we show how certain type errors should be corrected during type checking. Finally, we present a prototype implementation showing that our approach is feasible and efficient with moderate implementation effort.
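The decision-table idea from the abstract above can be sketched for a single operator (the signature names, table entries, and decision labels are all hypothetical; a real checker has one table per operator and richer signatures). Each lookup yields one of the decisions the abstract lists: a type error, a new signature, a coercion, or delegation to run time.

```python
# Decision table for a hypothetical '+' operator over signatures.
# Each entry maps (left_sig, right_sig) to a type-checking decision.
PLUS_TABLE = {
    ("int", "int"):       ("signature", "int"),
    ("real", "real"):     ("signature", "real"),
    ("int", "real"):      ("coerce_left", "real"),  # coerce int -> real
    ("string", "string"): ("signature", "string"),
}

def check_plus(left, right):
    """Static check of 'left + right' driven purely by the decision table."""
    decision = PLUS_TABLE.get((left, right))
    if decision is None and "dynamic" in (left, right):
        # Unknown statically: delegate the check to run time.
        return ("runtime_check", None)
    if decision is None:
        return ("type_error", None)
    return decision

print(check_plus("int", "real"))    # ('coerce_left', 'real')
print(check_plus("int", "string"))  # ('type_error', None)
```

In the full approach such lookups are interleaved with the static environment and result stacks, so the checker simulates at compile time exactly what the run-time stacks would do.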
9
Data-Intensive Grid Computing Based on Updatable Views
EN
In this report we propose a new approach to integrating distributed heterogeneous resources on the basis of a canonical object-oriented database model, a query language, and updatable database views. Views are used as wrappers/mediators on top of local servers and as a data integration facility for global applications; they support location, implementation, and replication transparency. Because views are defined in a high-level query language, the mechanism is much more abstract and flexible than, for example, CORBA or Web Services. The report gives a short introduction to the query language SBQL and updatable views, presents the architecture of a grid network based on updatable views, and gives simple examples illustrating the mechanism. We also show how the mechanism can be used in peer-to-peer networks, briefly describe the process of grid design and development, and finally present some issues related to grid metadata.
PL
In this report we propose a new approach to the integration of distributed, heterogeneous resources. The approach is based on a canonical object-oriented data model, a query language, and updatable database views. Views are used here as wrappers/mediators through which local servers expose their resources, and as the element that integrates resources and makes them available to global applications. Views support location, implementation, and replication transparency. Views are defined in a high-level language and are therefore more universal than, for example, CORBA or Web Services. The report gives a short introduction to the SBQL query language and to updatable views defined on top of it. We also present the architecture of a grid network based on updatable views and give a few examples illustrating its operation. We show how the presented mechanism can be used in peer-to-peer networks, then briefly discuss the process of designing and implementing a grid. Finally, the report presents issues related to metadata in the grid.
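The wrapper/mediator role of an updatable view can be sketched as follows (class and field names are hypothetical, and real views are defined in SBQL rather than in a host language): the view exposes a canonical shape of the data and translates updates back into the local server's native schema, giving implementation transparency.

```python
class LocalServer:
    """A local resource with its own native schema."""
    def __init__(self):
        self.rows = {1: {"surname": "Kowalski", "dept": "IT"}}

class EmployeeView:
    """Updatable view: exposes a canonical shape, translates updates back."""
    def __init__(self, server):
        self.server = server

    def get(self, emp_id):
        # Map the local schema onto the canonical one seen by global apps.
        row = self.server.rows[emp_id]
        return {"name": row["surname"], "department": row["dept"]}

    def update(self, emp_id, name):
        # The view's update procedure maps the change back to local storage.
        self.server.rows[emp_id]["surname"] = name

view = EmployeeView(LocalServer())
view.update(1, "Nowak")
print(view.get(1))  # {'name': 'Nowak', 'department': 'IT'}
```

Global applications see only the canonical shape, so the local server can change its schema, location, or replication strategy without affecting them.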