Results found: 6

Liczba wyników na stronie
first rewind previous Strona / 1 next fast forward last
Wyniki wyszukiwania
help Sortuj według:

help Ogranicz wyniki do:
first rewind previous Strona / 1 next fast forward last
PL
The article presents selected results of research on the space of medieval and early modern Chrzanów. The analyses were carried out using the socio-topographic method. Its application made it possible to verify selected theses current in the literature concerning the stages of formation of the chartered town, and to put forward new observations on the layout and development of the urban space. It proved possible to determine the probable original dimensions of a full-curial plot, as well as the number of such plots in the individual market-square frontages at the close of the Middle Ages. The course of the defensive line in the southern and south-eastern sections was also established more precisely. The article aims to draw attention to the need to use sources of various provenances in research on urban space in the pre-partition era.
EN
The article presents selected results of the research on the space of medieval and early modern Chrzanów. The analyses were conducted on the basis of the socio-topographic method. Its application allowed for verifying selected theses current in the literature concerning the formation stages of the chartered town, and for making new observations on the layout and development of the urban space. It was possible to determine the probable original dimensions of a full-curial plot, as well as their number in particular market frontages towards the end of the Middle Ages. The course of the defensive line in the southern and south-eastern sections was also determined more precisely. The article is meant to draw attention to the need for using sources of various provenances in the research on urban space in the pre-partition period.
EN
Automatic text categorization presents many difficulties. Modern algorithms are getting better at extracting meaningful information from human language, but they often significantly increase the computational complexity. This increased demand for computational capability can be met by hardware accelerators such as general-purpose graphics cards. In this paper we present a full processing flow for a document categorization system. GPU acceleration of the Gram-Schmidt signature calculation gives up to a 12-fold decrease in the computing time of system components.
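The abstract does not spell out the signature pipeline, so the following is only a minimal NumPy sketch of the (modified) Gram-Schmidt process that such a signature calculation builds on: document term vectors are orthonormalized, and the resulting basis can serve as a compact signature of the document set. The function name and the toy data are illustrative, not taken from the paper.

```python
import numpy as np

def gram_schmidt(vectors, eps=1e-12):
    """Modified Gram-Schmidt: orthonormalize the rows of `vectors`.

    Each row is one document's term vector; the orthonormal basis
    acts as a compact signature of the document set.
    """
    basis = []
    for v in vectors:
        w = v.astype(float)
        for b in basis:
            w = w - np.dot(w, b) * b   # remove the component along b
        norm = np.linalg.norm(w)
        if norm > eps:                 # drop (near-)linearly dependent rows
            basis.append(w / norm)
    return np.array(basis)

# toy term-frequency vectors for three documents (illustrative data)
docs = np.array([[3, 1, 0, 2],
                 [1, 0, 4, 1],
                 [2, 2, 1, 0]])
Q = gram_schmidt(docs)
print(np.round(Q @ Q.T, 6))  # ~identity matrix: rows are orthonormal
```

The per-vector projections are independent dot products, which is what makes this step a natural candidate for GPU offloading.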
EN
The presented paper is concerned with feature space derivation through feature selection. The selection is performed on the results of kernel Principal Component Analysis (kPCA) of input data samples. Several criteria that drive the feature selection process are introduced, and their performance is assessed and compared against the reference approach, which is a combination of kPCA and most-expressive-feature reordering based on the Fisher linear discriminant criterion. It is shown that some of the proposed modifications generate feature spaces with noticeably better (by approximately 4%) class discrimination properties.
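As a sketch of the reference approach described above (kPCA followed by Fisher-criterion reordering of the most expressive features), the fragment below uses scikit-learn's KernelPCA on synthetic two-class data. The RBF kernel, its gamma, and the number of retained components are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA

def fisher_score(feature, labels):
    """Fisher criterion for one feature on a two-class problem:
    (m1 - m2)^2 / (s1^2 + s2^2)."""
    a, b = feature[labels == 0], feature[labels == 1]
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

# synthetic two-class data (the paper's data sets are not specified)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# project the samples onto kernel principal components
kpca = KernelPCA(n_components=8, kernel="rbf", gamma=0.1)
Z = kpca.fit_transform(X)

# reorder the kPCA features by Fisher score and keep the top k
scores = np.array([fisher_score(Z[:, j], y) for j in range(Z.shape[1])])
order = np.argsort(scores)[::-1]
k = 4
Z_selected = Z[:, order[:k]]
print("selected components:", order[:k])
```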
PL
This work analyses the possibility of using the winnowing algorithm for stream processing of textual information. Particular emphasis is placed on the generation of a fingerprint, a reduced representation of a text message. The authors conducted a series of experiments to determine the efficiency of the algorithm and the achievable computational speedup, using a node with Intel Xeon E5645 2.40 GHz processors and an Nvidia Tesla M2090 GPU card.
EN
There are several models available for information retrieval and text analysis, but two are considered dominant, namely the Boolean model and the vector space model (VSM). A model maps the existing words or text into a new representation space. This paper presents a Boolean n-gram-based algorithm, winnowing, for fast text search and comparison of documents, with the main focus on its implementation and performance analysis. The algorithm is used to generate fingerprints (i.e. sets of hashes) of the analysed documents. A dedicated test framework was designed and implemented to handle the algorithm evaluation; it utilizes the PAN test corpus and programming environment. Several tests were conducted to determine the comparison quality for obfuscated and non-obfuscated text with the winnowing algorithm and different window and n-gram sizes. The tests revealed interesting properties of the algorithms with respect to document comparison and defined the limits of their applicability. Due to their simplicity, n-gram-based algorithms are well suited for hardware implementation. Thus, the authors implemented the computationally demanding part of fingerprint generation both on the CPU and on the GPU. Performance measurements for the Intel Xeon E5645 2.40 GHz and Nvidia Tesla M2090 implementations of the n-gram-based algorithm show an approximately 14x computational speedup.
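The abstract does not reproduce the algorithm itself, so here is a minimal sketch of winnowing as defined by Schleimer et al.: hash every character n-gram, slide a window of w consecutive hashes, and record the minimum of each window (rightmost occurrence on ties) as the fingerprint. The similarity measure and parameter values are illustrative choices, not the paper's.

```python
def winnow(text, n=5, w=4):
    """Winnowing fingerprint (Schleimer et al.): hash every character
    n-gram, then keep the minimum hash of each window of w consecutive
    hashes, taking the rightmost occurrence on ties.

    Returns a set of (hash, position) pairs, the document fingerprint.
    Python's built-in hash() stands in for the rolling Karp-Rabin hash
    a production implementation would use.
    """
    text = "".join(text.lower().split())   # crude normalization
    hashes = [hash(text[i:i + n]) for i in range(len(text) - n + 1)]
    fingerprint = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        j = min(range(w), key=lambda k: (window[k], -k))  # rightmost minimum
        fingerprint.add((window[j], i + j))
    return fingerprint

def similarity(fp_a, fp_b):
    """Jaccard overlap of the fingerprints' hash sets."""
    ha, hb = {h for h, _ in fp_a}, {h for h, _ in fp_b}
    return len(ha & hb) / max(1, len(ha | hb))

a = winnow("The quick brown fox jumps over the lazy dog")
b = winnow("A quick brown fox jumped over a lazy dog")
print(round(similarity(a, b), 2))
```

Because every window of w hashes contributes at least one selected hash, matches of length n + w - 1 or more are guaranteed to be detected regardless of their position, which is what makes the method robust to text reordering and light obfuscation.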
EN
Sorting is a common problem in computer science. There are many well-known sorting algorithms created for sequential execution on a single processor. Recently, many-core and multi-core platforms have enabled the creation of highly parallel algorithms. Standard processors now consist of multiple cores, and hardware accelerators such as the GPU are widely available. Graphics cards, with their parallel architecture, provide new opportunities to speed up many algorithms. In this paper, we describe the results of implementing several parallel sorting algorithms on GPU cards and multi-core processors. Then, a hybrid algorithm is presented, consisting of parts executed on both platforms (a standard CPU and a GPU). In the recent literature on the implementation of sorting algorithms on the GPU, a fair comparison between many-core and multi-core platforms is lacking: in most cases, only the execution time of a sorting algorithm on the GPU platform and on a single CPU core is reported.
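The paper's exact CPU/GPU partitioning is not given in the abstract, so the sketch below only illustrates the general hybrid pattern: sort chunks in parallel (here, worker processes stand in for the GPU and the CPU cores), then k-way merge the sorted runs on the host. All names and sizes are illustrative.

```python
import heapq
import random
from multiprocessing import Pool

def hybrid_sort(data, workers=4):
    """Sketch of the hybrid pattern: partition the input, sort the
    chunks in parallel (worker processes stand in for GPU/CPU cores),
    then k-way merge the sorted runs sequentially on the host."""
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with Pool(workers) as pool:
        runs = pool.map(sorted, parts)    # parallel per-chunk sort
    return list(heapq.merge(*runs))       # sequential merge phase

if __name__ == "__main__":
    data = [random.randint(0, 10**6) for _ in range(100_000)]
    assert hybrid_sort(data) == sorted(data)
    print("sorted", len(data), "elements")
```

In a real CPU+GPU hybrid, the per-chunk sort would run as a GPU kernel (e.g. a bitonic or radix sort) while the host CPU handles partitioning and the final merge.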
PL
The research presented in this paper concerns image segmentation using the Support Vector Machine (SVM) method. The method relies on a group of a dozen or so support vectors that carry the features of selected objects in the image. The presented support-vector classification procedure was implemented both in software, in C++ on a general-purpose AMD Athlon II P320 Dual-Core 2.10 GHz processor, and in hardware, in VHDL. The support-vector classification module was implemented in a Xilinx Spartan 6 device.
EN
The paper presents preliminary implementation results of image segmentation with the SVM (Support Vector Machine) algorithm. SVM is a dedicated mathematical formalism which allows extracting selected objects from an input picture and assigning them to an appropriate class. Consequently, black-and-white images reflecting the occurrence of the desired feature are derived from the original picture fed into the classifier. This work is primarily focused on the FPGA implementation aspects of the algorithm, as well as on a comparison of the hardware and software performance. A human skin classifier was used as an example and implemented both on an AMD Athlon II P320 Dual-Core 2.10 GHz processor and in a Xilinx Spartan 6 FPGA. It is worth emphasizing that the critical hardware components were designed in HDL, whereas the less demanding standard ones, such as communication interfaces, FIFOs and FSMs, were implemented in an HLL (High-Level Language). Such an approach allowed both shortening the design time and preserving the high performance of the hardware classification module. This work is part of the Synat project, which embraces several initiatives aimed at creating a repository of images to which descriptive names are to be assigned according to their contents. Such a database of tagged images will significantly reduce search time, since only picture tags will be processed instead of images, so the process will involve simple string operations rather than image recognition. The project is a huge challenge due to the immense volume of data collected over the past years, known today as the Internet resources. Therefore, the core part of the undertaking is to design and implement a classification system which is both reliable and fast. In order to achieve high search-engine performance, the most computationally intensive operations are to be ported to hardware.
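For readers unfamiliar with the setup, the following is a software-only sketch of a per-pixel SVM skin classifier of the kind the abstract describes. The RGB feature set, the RBF kernel, and the toy training pixels are assumptions; a real system would train on a large labelled corpus before porting the decision function to the FPGA.

```python
import numpy as np
from sklearn.svm import SVC

# toy training set: RGB pixels labelled skin (1) / non-skin (0);
# a real classifier would be trained on thousands of labelled pixels
X_train = np.array([[224, 180, 150], [210, 160, 130], [190, 140, 120],
                    [30, 60, 200], [20, 200, 40], [240, 240, 240]])
y_train = np.array([1, 1, 1, 0, 0, 0])

clf = SVC(kernel="rbf", gamma="scale")   # kernel choice is an assumption
clf.fit(X_train / 255.0, y_train)

# classify every pixel to obtain the binary (black-and-white) mask
image = np.random.randint(0, 256, size=(4, 4, 3))   # stand-in input image
pixels = image.reshape(-1, 3) / 255.0
mask = clf.predict(pixels).reshape(image.shape[:2])
print(mask)   # 1 = skin-like pixel, 0 = background
```

The per-pixel decision function is a fixed sum of kernel evaluations against the stored support vectors, which is why it maps well onto a pipelined FPGA datapath.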