Search results
Searched in keywords: cache memory
Results found: 9
1
Content available remote On Tight Separation for Blum Measures Applied to Turing Machine Buffer Complexity
EN
We formulate a very general tight diagonalization method for Blum complexity measures satisfying two additional axioms related to our diagonalizer machine. We apply this method to two new, mutually related measures, the distance and buffer complexities of Turing machine computations, which are important nontrivial examples of natural Blum complexity measures different from time and space. In particular, these measures capture how many times the worktape head needs to move a certain distance during the computation, which corresponds to the number of necessary block uploads into a buffer cache memory. We start this study by proving a tight separation which shows that a very small increase in the distance or buffer complexity bound (roughly from f(n) to f(n + 1)) brings provably more computational power to both deterministic and nondeterministic Turing machines, even for unary languages. We also obtain hierarchies of the distance and buffer complexity classes.
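For orientation, a measure Φ is a Blum complexity measure when it satisfies the two standard Blum axioms, recalled below in LaTeX; the two additional axioms the paper requires of its diagonalizer machine are specific to the paper and are not reproduced here.

% The two standard Blum axioms for a measure \Phi associated with
% a Goedel numbering (\varphi_i) of the partial computable functions:
\begin{align*}
\text{(B1)}\quad & \operatorname{dom}(\Phi_i) = \operatorname{dom}(\varphi_i) \quad \text{for every } i, \\
\text{(B2)}\quad & \text{the predicate } \Phi_i(x) = m \text{ is decidable in } (i, x, m).
\end{align*}

Time and space both satisfy (B1) and (B2); the abstract's distance and buffer measures are further natural examples.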
2
EN
Modeling the different parts of computer systems requires accurate statistical tools. Cache memory is an inherent part of today's computer systems, where the hierarchical memory structure plays a key role in the behavior and performance of the whole system. In the case of Windows operating systems, the cache is a place in the memory subsystem where the I/O system puts recently used data from disk. This paper presents some preliminary results on the statistical behavior of one selected system counter. The obtained results show that the real phenomena appearing during human-computer interaction can be expressed in terms of non-extensive statistics, related to Tsallis's proposal of a new entropy definition.
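For reference, the Tsallis proposal mentioned at the end of the abstract is the non-extensive entropy below; this is the standard definition, not a result of the paper, and the fitted counter data are in the paper itself.

% Tsallis non-extensive entropy of a discrete distribution (p_i);
% the Boltzmann-Gibbs-Shannon entropy is recovered as q -> 1.
\begin{align*}
S_q &= k\,\frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad
\lim_{q \to 1} S_q = -k \sum_i p_i \ln p_i .
\end{align*}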
3
Content available Obliczeniowe szacowanie czasu wykonania programu (Computational estimation of program execution time)
PL
Determining the execution time of a program by actually running it is not always feasible in practical settings, for example in iterative compilation, because it greatly lengthens software development time. In many situations, however, there is no need to determine this time exactly; an estimate would suffice. This article proposes a method for computationally estimating the execution time of a program based solely on the form of its source code and known parameters of the hardware environment.
EN
Program execution time is one of the criteria taken into account when assessing broadly understood software quality. The general goal is to make program execution time as short as possible. Execution time depends on many very different factors, the most obvious being the form of the source code and the hardware environment in which the program is executed. In practice, even a very minor change in the form of the source code can result in a significant change in execution time; the same effect can be caused by a slight change in the values of hardware parameters. Although the interpretation of program execution time as a quality criterion is very simple, it is sometimes very difficult to measure precisely, and taking the necessary measurements requires running the program. However, there is very often no need to know this time precisely; it would be sufficient to estimate it with an error known in advance. Using the matrix multiplication problem for reference, the paper proposes a method for estimating the execution time of a program based only on its source code and a priori known hardware parameters. The idea of the proposed method is to elaborate a mathematical model combining a statistical approach with Wolfe's method for calculating data locality. The paper discusses the results of using the elaborated model on a control sample and indicates directions for further work.
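The abstract does not give the model itself; purely as an illustration of the general idea (a cost model driven by source-level counts and a priori hardware parameters), one might combine an operation count with an estimated cache-miss count, as in the hypothetical C sketch below. All names and coefficients are illustrative assumptions, not the authors' model.

#include <stdio.h>

/* Hypothetical hardware parameters known a priori (illustrative values). */
typedef struct {
    double ns_per_op;     /* average cost of one arithmetic operation */
    double ns_per_miss;   /* average penalty of one cache miss        */
} hw_params;

/* Estimate execution time from source-level counts: quantities one can
 * derive from the code without running it. This is NOT the paper's
 * model, only a sketch of the approach. */
static double estimate_time_ns(double op_count, double est_misses,
                               const hw_params *hw)
{
    return op_count * hw->ns_per_op + est_misses * hw->ns_per_miss;
}

int main(void)
{
    hw_params hw = { 0.5, 100.0 };           /* assumed, not measured   */
    double n = 1000.0;                       /* matrix dimension        */
    double ops = 2.0 * n * n * n;            /* matmul: n^3 mul + add   */
    double misses = n * n * n / 8.0;         /* crude locality estimate */
    printf("estimated time: %.3f ms\n",
           estimate_time_ns(ops, misses, &hw) / 1e6);
    return 0;
}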
4
PL
Data access locality is a critical factor conditioning the computational performance of software. Compilation tools are therefore expected to automate the process of transforming non-optimal code into a form characterized by high data locality. This article presents an approach to estimating the data locality of programs based on their ANSI-C source code. The results of experimental studies are discussed and directions for further work are indicated.
EN
Good data locality, understood as placing program data in memory so that the data requested by the processor are available immediately on demand, is a critical software requirement for achieving high efficiency in data processing. One way to achieve good data locality is to transform source code at the compilation stage so as to improve its usage of the cache memory and thus fully benefit from the concept of a memory hierarchy. Modern compilers are expected to carry out this kind of optimization automatically by adopting relevant transformations. In order to select the transformation best suited for a given source code, the compiler should be able to compare the available transformations from this point of view and indicate the one that produces a semantically identical code with the shortest possible execution time. The paper briefly describes Wolfe's method of estimating data locality based on calculations carried out directly on the source code under analysis, without any need for time-consuming compilation of the source code to its executable form or for collecting memory access metrics at run time. The paper also outlines how the authors implemented in C++ a software module estimating data locality for ANSI-C source codes based on Wolfe's method. The paper discusses the results of adopting the proposed approach to some selected source codes and indicates directions for further work.
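Textbook presentations of this kind of locality estimate, in the spirit of Wolfe's method, assign each array reference a cost with respect to a candidate innermost loop: a loop-invariant reference costs one cache line, a unit-stride reference costs one line per line-size iterations, and a non-unit-stride reference costs one line per iteration. The following C fragment is a minimal sketch of that heuristic applied to matrix multiplication; it illustrates the idea only and is not the authors' C++ module, and the sizes are assumed.

#include <stdio.h>

/* Reference classification with respect to a candidate innermost loop. */
enum ref_kind { INVARIANT, UNIT_STRIDE, NONUNIT_STRIDE };

/* Estimated number of cache lines touched by one reference over a full
 * run of the innermost loop (trip iterations), following the textbook
 * loop-cost heuristic; line is the cache line size in elements. */
static double ref_cost(enum ref_kind k, double trip, double line)
{
    switch (k) {
    case INVARIANT:   return 1.0;          /* stays in one line/register */
    case UNIT_STRIDE: return trip / line;  /* consecutive elements       */
    default:          return trip;         /* new line every iteration   */
    }
}

int main(void)
{
    double n = 1024.0, line = 8.0;  /* assumed sizes, for illustration */

    /* C[i][j] += A[i][k] * B[k][j], row-major arrays.
     * Innermost loop k: A unit-stride, B non-unit-stride, C invariant. */
    double cost_k = n * n * (ref_cost(UNIT_STRIDE,    n, line)   /* A */
                           + ref_cost(NONUNIT_STRIDE, n, line)   /* B */
                           + ref_cost(INVARIANT,      n, line)); /* C */

    /* Innermost loop j: A invariant, B and C unit-stride. */
    double cost_j = n * n * (ref_cost(INVARIANT,   n, line)
                           + ref_cost(UNIT_STRIDE, n, line)
                           + ref_cost(UNIT_STRIDE, n, line));

    printf("innermost k: %.0f lines, innermost j: %.0f lines\n",
           cost_k, cost_j);
    return 0;
}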
5
EN
In order to use cache memory effectively, it is essential to ensure good data locality at the cache memory level. This can be achieved by appropriately transforming the source code of a program into a semantically equivalent form. The problem, however, is how to assess, based only on the form of the source code, the data locality a program involves, and how to apply this assessment to select the source code with the shortest execution time. The paper presents Wolfe's method of estimating data locality and, using the matrix multiplication problem for reference, discusses the possibilities of applying Wolfe's method for the purpose of estimating program execution time. The paper also presents software prepared by the authors and dedicated to estimating data locality.
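On the matrix multiplication example C[i][j] += A[i][k] * B[k][j] (row-major n-by-n matrices, cache line holding L elements), the locality estimate works out as below; this is the standard textbook calculation under the heuristic sketched earlier, included only for illustration.

% Loop-cost estimate per candidate innermost loop:
\begin{align*}
\text{innermost } k:&\quad n^2\left(\tfrac{n}{L} + n + 1\right) \approx n^3
  && (A \text{ unit stride},\ B \text{ stride } n,\ C \text{ invariant}),\\
\text{innermost } j:&\quad n^2\left(1 + \tfrac{n}{L} + \tfrac{n}{L}\right) \approx \tfrac{2n^3}{L}
  && (A \text{ invariant},\ B, C \text{ unit stride}),
\end{align*}

so the estimate favors placing j innermost (the ikj order), touching roughly L/2 times fewer cache lines than the naive ijk order.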
6
EN
Program execution time is one of the criteria taken into account when assessing software quality. It is sometimes very difficult to measure this time precisely, and carrying out the necessary measurements requires running the program. However, there is very often no need to know this time precisely; it would be sufficient to estimate it with an error known in advance. The paper presents the proposal and assumptions of a model for estimating the execution time of a program based only on its source code. It introduces a sample statistical model which can be used for this purpose, created from empirical data collected for the matrix multiplication problem. The paper also analyzes the possibilities of applying this statistical model to some other programs.
7
PL
This article discusses the problem of data locality and presents existing techniques for increasing it that rely on transforming loop source code to make better use of the processor's cache memory. It also presents the concept of a method for increasing data locality at the cache memory level, based on known program loop transformations and on a computational-experimental analysis of data locality metrics. A conceptual model of a software module implementing the obtained research results is presented.
EN
This paper outlines the idea of hierarchical memory organization, focusing on cache memory. It also briefly discusses popular software techniques and approaches which can be used to benefit more fully from the specific nature and potential of cache memory. In this context, the paper presents the concept of a new method for shortening the execution time of various executable programs. The new method aims at increasing data locality at the cache memory level based on transformations of program loops. A proposal for applying the new method in practice is described as well.
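Loop tiling (blocking) is one of the classic loop transformations of this kind; the abstract does not name the specific transformations the new method relies on, so the C fragment below merely illustrates the general technique on matrix multiplication, with assumed sizes.

#include <stdio.h>

#define N 512
#define TB 64   /* tile size; in practice tuned to the cache (assumed) */

static double A[N][N], B[N][N], C[N][N];

/* Tiled matrix multiplication: each TB x TB block of A, B and C is
 * reused while still resident in the cache, which improves data
 * locality over the naive three-loop version. */
static void matmul_tiled(void)
{
    for (int ii = 0; ii < N; ii += TB)
        for (int kk = 0; kk < N; kk += TB)
            for (int jj = 0; jj < N; jj += TB)
                for (int i = ii; i < ii + TB; i++)
                    for (int k = kk; k < kk + TB; k++)
                        for (int j = jj; j < jj + TB; j++)
                            C[i][j] += A[i][k] * B[k][j];
}

int main(void)
{
    A[1][2] = 3.0; B[2][4] = 5.0;
    matmul_tiled();
    printf("C[1][4] = %.1f\n", C[1][4]);   /* expect 15.0 */
    return 0;
}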
8
Content available remote The Knapsack-Lightening problem and its application to scheduling HRT tasks
EN
In hard real-time systems, timeliness is as important as functional correctness. Such systems contain so-called hard real-time tasks (HRT tasks), which must be finished by a given deadline. One of the methods of scheduling HRT tasks is periodic loading, introduced by Schweitzer, Dror, and Trudeau. The paper presents an extension to that method which allows for deterministic utilization of cache memory in hard real-time systems. It is based on a new version of the Knapsack problem, named Knapsack-Lightening. The paper defines the Knapsack-Lightening problem, analyzes its complexity, and presents an exact algorithm along with two heuristics. Moreover, the application of the Knapsack-Lightening problem to scheduling HRT tasks is described.
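The Knapsack-Lightening variant itself is defined in the paper and is not reproduced here; for orientation, the classic 0/1 Knapsack problem it builds on is solved exactly by the standard dynamic program below.

#include <stdio.h>

/* Standard 0/1 Knapsack dynamic program: best[c] is the maximum value
 * achievable with capacity c using the items seen so far. O(n * cap).
 * This is the classic base problem, not the Knapsack-Lightening variant. */
static int knapsack(const int *w, const int *v, int n, int cap)
{
    int best[cap + 1];
    for (int c = 0; c <= cap; c++) best[c] = 0;
    for (int i = 0; i < n; i++)
        for (int c = cap; c >= w[i]; c--)   /* descending: each item once */
            if (best[c - w[i]] + v[i] > best[c])
                best[c] = best[c - w[i]] + v[i];
    return best[cap];
}

int main(void)
{
    int w[] = { 3, 4, 5 }, v[] = { 4, 5, 6 };
    printf("best value: %d\n", knapsack(w, v, 3, 8));   /* prints 10 */
    return 0;
}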
9
PL
In this article the authors review existing packet classification algorithms in order to adapt the most suitable one for the needs of a Firewall-class network security system under development. They also present concepts for increasing the overall performance of the proposed solution by applying additional mechanisms that use, among others, cache memories, pipelining, and parallelization of data processing.
EN
In this paper the authors present their research into the current state of hardware-implemented packet classification algorithms, carried out with a view to adapting one of them for their implementation of a hardware Firewall security system. The paper also describes the idea of enhancing overall processing efficiency by using additional mechanisms such as local cache memory, pipelining, and parallel processing.
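One common way to exploit a local cache in packet classification, in software as in hardware, is a small direct-mapped flow cache keyed on the packet 5-tuple, so repeated packets of a flow bypass the full classifier. The C sketch below is a generic software illustration of this mechanism, not the authors' hardware design; all sizes and names are assumptions.

#include <stdint.h>
#include <stdio.h>

#define CACHE_SLOTS 1024  /* small direct-mapped flow cache (assumed size) */

struct flow_key { uint32_t src, dst; uint16_t sport, dport; uint8_t proto; };
struct cache_entry { struct flow_key key; int rule_id; int valid; };

static struct cache_entry cache[CACHE_SLOTS];

/* Simple hash of the 5-tuple selecting a cache slot. */
static unsigned slot_of(const struct flow_key *k)
{
    uint32_t h = k->src ^ k->dst ^ ((uint32_t)k->sport << 16 | k->dport)
               ^ k->proto;
    h ^= h >> 16;
    return h % CACHE_SLOTS;
}

static int key_eq(const struct flow_key *a, const struct flow_key *b)
{
    return a->src == b->src && a->dst == b->dst &&
           a->sport == b->sport && a->dport == b->dport &&
           a->proto == b->proto;
}

/* Returns the cached rule id, or -1 on a miss; on a miss the caller runs
 * the full classification algorithm and stores the result with cache_put. */
static int cache_lookup(const struct flow_key *k)
{
    struct cache_entry *e = &cache[slot_of(k)];
    return (e->valid && key_eq(&e->key, k)) ? e->rule_id : -1;
}

static void cache_put(const struct flow_key *k, int rule_id)
{
    struct cache_entry *e = &cache[slot_of(k)];
    e->key = *k;
    e->rule_id = rule_id;
    e->valid = 1;
}

int main(void)
{
    struct flow_key k = { 0x0A000001, 0x0A000002, 1234, 80, 6 };
    if (cache_lookup(&k) < 0)   /* miss: run the full classifier ... */
        cache_put(&k, 42);      /* ... and remember its decision     */
    printf("rule: %d\n", cache_lookup(&k));   /* hit: prints rule 42 */
    return 0;
}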