Results found: 6
EN
In a world in which biometric systems are used more and more often in our surroundings, while the number of publications on this topic grows, access to databases containing information usable by the creators of such systems becomes an important issue. Databases of this type, compiled as a result of research conducted at leading centres, are made available to interested parties. However, combining data from different centres may be problematic. The aim of the present work is to verify whether applying the same research procedure to groups with similar characteristics at two different centres yields databases that can be used to recognise a person based on Ground Reaction Forces (GRF). The studies reported in this paper were performed at the Bialystok University of Technology (BUT) and the Lublin University of Technology (LUT). In all, the study sample consisted of 366 people, allowing 6,198 human gait cycles to be recorded. Based on the obtained GRF data, a set of features describing human gait was compiled and then used to test a system's ability to identify a person. The obtained percentages of correct identifications, 99.46% for BUT, 100% for LUT and 99.5% for the mixed data set, demonstrate the very high quality of the features and classification algorithms used. A more detailed analysis of erroneous classifications showed that mistakes occur most often between people tested at the same laboratory. A statistical analysis of selected attributes revealed statistically significant differences between values obtained at different laboratories.
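The pipeline described above, extracting features from a GRF curve and matching them against enrolled subjects, can be sketched in miniature. This is an illustrative toy, not the paper's feature set or classifier: the three features, the synthetic two-bump curves, and the 1-nearest-neighbour rule are all assumptions made for the example.

```python
import math

def grf_features(curve):
    """Reduce one vertical GRF gait cycle (resampled to ~100 points) to scalars."""
    peak = max(curve)                                      # loading peak
    valley = min(curve[len(curve)//4 : 3*len(curve)//4])   # mid-stance dip
    mean_load = sum(curve) / len(curve)                    # average force
    return (peak, valley, mean_load)

def identify(probe, gallery):
    """Return the enrolled subject whose features are closest (Euclidean 1-NN)."""
    p = grf_features(probe)
    def dist(subject):
        g = gallery[subject]
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, g)))
    return min(gallery, key=dist)

def make_curve(peak1, dip, peak2, n=100):
    """Synthetic double-peaked shape, crudely mimicking vertical GRF in stance."""
    curve = []
    for i in range(n):
        t = i / (n - 1)
        curve.append(peak1 * math.exp(-((t - 0.25) / 0.12) ** 2)
                     + peak2 * math.exp(-((t - 0.75) / 0.12) ** 2)
                     + dip * math.exp(-((t - 0.5) / 0.1) ** 2))
    return curve

# Toy gallery of two subjects with distinct gait shapes.
gallery = {
    "subject_A": grf_features(make_curve(1.2, 0.7, 1.1)),
    "subject_B": grf_features(make_curve(0.9, 0.8, 1.3)),
}
probe = make_curve(1.19, 0.71, 1.12)   # a new cycle resembling subject A
print(identify(probe, gallery))
```

In the study a far richer feature set and classifier are used; the point here is only the enrol-then-match structure that makes cross-laboratory data mixing meaningful.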
EN
The analysis of movement is one of the notable applications within the field of computer animation. Sophisticated motion capture techniques make it possible to acquire motion and store it in digital form for further analysis. The combination of these two aspects of computer vision enables the presentation of data in a way accessible to the user. The primary objective of this study is to introduce an artificial-intelligence-based system for animating tennis motion capture data. The Dual Attention Graph Convolutional Network was applied. Its unique approach consists of two attention modules, one for body analysis and the other for tennis racket alignment. The input to the classifier is a sequence of three-dimensional data generated by the Mocap system, representing a player holding a tennis racket and performing fundamental tennis hits, which are classified with great success, reaching a maximum accuracy of over 95%. The recognised movements are further processed using dedicated software. Movement sequences are assigned to the tennis player's 3D digital model. In this way, realistic character animations are obtained, reflecting the recognised moves, which can be further applied in movies, video games and other visual projects.
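The core idea of an attention module, learning to weight the body joints (or the racket) that matter most before pooling their features, can be illustrated with a minimal sketch. This is not the paper's DA-GCN; the softmax-weighted pooling below is a generic, simplified stand-in, and the feature vectors and scores are invented.

```python
import math

def attention_pool(joint_feats, scores):
    """Pool per-joint feature vectors into one descriptor.

    joint_feats: list of equal-length feature vectors, one per joint.
    scores: one relevance score per joint; softmax turns them into weights.
    """
    m = max(scores)                                   # for numerical stability
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    w = [x / z for x in w]                            # weights sum to 1
    dim = len(joint_feats[0])
    return [sum(w[j] * joint_feats[j][d] for j in range(len(joint_feats)))
            for d in range(dim)]

# Equal scores -> plain average of the two joints' features.
print(attention_pool([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]))
```

With a much higher score on one joint, the pooled descriptor is dominated by that joint's features, which is what lets an attention branch focus on, say, the racket-holding arm.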
EN
Segmentation is one of the image processing techniques, widely used in computer vision, for extracting various types of information represented as objects or areas of interest. The development of neural networks has influenced image processing techniques, including the creation of new approaches to image segmentation. The aim of this study is to compare classical algorithms and deep learning methods in RGB image segmentation tasks. Two hypotheses were put forward: 1) "The quality of segmentation using deep learning methods is higher than using classical methods for RGB images", and 2) "Increasing the RGB image resolution has a positive impact on segmentation quality". Two traditional segmentation algorithms (Thresholding and K-means) were compared with deep learning approaches (U-Net, SegNet and FCN-8) to verify RGB segmentation quality. Two image resolutions were taken into consideration: 160×240 and 320×480 pixels. Segmentation quality for each algorithm was estimated based on four parameters: Accuracy, Precision, Recall and the Sorensen-Dice ratio (Dice score). The study used the Carvana dataset, containing 5,088 high-resolution images of cars. The initial set was divided into training, validation and test subsets of 60%, 20% and 20%, respectively. As a result, the best Accuracy, Dice score and Recall for images with resolution 160×240 were obtained for U-Net, achieving 99.37%, 98.56%, and 98.93%, respectively. For the same resolution the highest Precision, 98.19%, was obtained for the FCN-8 architecture. For the higher resolution, 320×480, the best mean Accuracy, Dice score and Precision were obtained for the FCN-8 network, reaching 99.55%, 99.95% and 98.85%, respectively. The highest results for classical methods were obtained for the Thresholding algorithm, reaching 80.41% Accuracy, 58.49% Dice score, 67.32% Recall and 52.62% Precision. The results confirm both hypotheses.
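The four quality measures named above are standard and can be computed directly from a confusion matrix over binary masks; global thresholding is likewise a one-liner. The sketch below shows both on a toy 1-D "image" (the pixel values, ground-truth mask and threshold are invented for illustration).

```python
def confusion(pred, truth):
    """Count true/false positives and negatives for binary masks (1 = object)."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    return tp, tn, fp, fn

def metrics(pred, truth):
    tp, tn, fp, fn = confusion(pred, truth)
    accuracy  = (tp + tn) / len(truth)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    dice      = 2 * tp / (2 * tp + fp + fn)   # Sorensen-Dice ratio
    return accuracy, precision, recall, dice

def threshold_segment(pixels, t):
    """Classical global thresholding: intensity above t is labelled object."""
    return [1 if p > t else 0 for p in pixels]

image = [10, 200, 180, 150, 220, 15, 25, 190]   # toy intensities
truth = [0, 1, 1, 0, 1, 0, 0, 1]                # ground-truth mask
pred = threshold_segment(image, 128)             # one false positive at index 3
print(metrics(pred, truth))
```

The same `metrics` function applies unchanged to the flattened output of a U-Net, SegNet or FCN-8 mask, which is what makes classical and deep methods directly comparable on these four numbers.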
PL
The subject of this work is a comparative analysis of three application container orchestration tools: Kubernetes 1.2.2, Docker Swarm 1.24 and HashiCorp Nomad 1.2.0. For this purpose, an application responding to requests was implemented and then containerized using the Docker technology. For each tool, a scenario measuring pod startup time was repeated three times. Alongside the startup-time measurements, the load on hardware components was examined. The comparison also took replica regeneration time into account. The final experiment examined the load-balancing mechanisms. The analyses show that, with respect to most of the criteria considered in this work, Docker Swarm turned out to be the best orchestration tool.
EN
The aim of the work is a comparative analysis of three tools for application container orchestration: Kubernetes 1.2.2, Docker Swarm 1.24 and HashiCorp Nomad 1.2.0. For this purpose, a test application responding to requests was implemented and then containerized using Docker. For each tool, a scenario measuring pod startup time was executed; the measurement was repeated three times, with the number of replicas increased on each repetition. Simultaneously with the startup-time test, CPU load and memory usage were measured. The comparison also took replica regeneration time into consideration, measured as the response time to a GET request. The analysis showed that, in terms of most of the criteria examined in this work, Docker Swarm turned out to be the best orchestration tool.
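The measurement pattern used above, poll until a replica answers and record the elapsed time, can be sketched independently of any particular orchestrator. The harness below simulates readiness in-process so it runs stand-alone; in the actual study the readiness check would be an HTTP GET against the deployed pods, and the poll interval and timeout here are arbitrary choices.

```python
import time

def measure_startup(is_ready, poll_interval=0.01, timeout=5.0):
    """Poll a readiness check until it succeeds; return the elapsed time."""
    start = time.perf_counter()
    while time.perf_counter() - start < timeout:
        if is_ready():
            return time.perf_counter() - start
        time.sleep(poll_interval)
    raise TimeoutError("replica did not become ready in time")

def simulated_replica(polls_needed):
    """Stand-in for a starting container: ready after N readiness probes."""
    state = {"polls": 0}
    def is_ready():
        state["polls"] += 1
        return state["polls"] >= polls_needed
    return is_ready

elapsed = measure_startup(simulated_replica(5))
print(round(elapsed, 3))
```

Repeating such a measurement several times with a growing replica count, as the study does, then reduces to calling the harness in a loop and averaging the results.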
PL
This paper presents a comparative analysis of two of the most popular data streaming services: Apache Kafka and RabbitMQ. The goal was to perform a comparative analysis of the selected technologies and to determine their time efficiency. Four applications (two for each tested technology) sending and receiving messages were used in the research. The research was supplemented with tests using auxiliary tools and with a theoretical comparison. The comparative analysis of the obtained results identified the more efficient solution, which is Apache Kafka.
EN
The article presents a comparative analysis of the two most popular message brokers: Apache Kafka and RabbitMQ. The purpose of this paper was to perform a comparative analysis of the selected technologies and to determine their time efficiency. For the needs of the research, four applications were prepared (two for each tested technology) that sent and received messages. The research was supplemented with tests using auxiliary tools and with a theoretical comparison. The comparative analysis of the gathered data allowed the most effective technology to be determined, which turned out to be Apache Kafka.
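A send/receive time-efficiency test of the kind described boils down to timing how long it takes to push and drain a fixed number of messages. The sketch below replaces the broker with an in-process `queue.Queue` so it is self-contained; in the study the `send` and `receive` callables would instead wrap the Kafka or RabbitMQ client libraries, and the message count of 1,000 is arbitrary.

```python
import queue
import time

def run_benchmark(send, receive, n_messages):
    """Time sending then receiving n_messages through a broker-like pair."""
    start = time.perf_counter()
    for i in range(n_messages):
        send(f"msg-{i}".encode())            # producer side
    received = [receive() for _ in range(n_messages)]  # consumer side
    elapsed = time.perf_counter() - start
    return len(received), elapsed

broker = queue.Queue()                        # stand-in for Kafka/RabbitMQ
count, elapsed = run_benchmark(broker.put, broker.get, 1000)
print(count, round(1000 / elapsed), "msg/s")
```

Running the same harness against both technologies with identical payloads and counts is what makes the resulting messages-per-second figures directly comparable.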
EN
Universal design is a strategic approach to planning and designing both products and their environment, aimed at making a given product available to the widest possible range of users. It ensures equality for all of them and the opportunity to participate in society. This concept is also crucial in the process of designing and developing software. The research was conducted on four services, three of which were implemented for the purpose of this study. Two of them followed the principles of universal design, while the others did not. The aim of the study was to verify the level of usability and accessibility of the services by means of three independent methods: the LUT (Lublin University of Technology) checklist, an assessment against WCAG 2.0 (Web Content Accessibility Guidelines) standards using the automatic WAVE evaluation tool (Web Accessibility Evaluation Tool), and an eye-tracking device recording eye movement while various tasks were performed on the websites. The websites were assessed by twenty experts in the field of web application interface design, using the LUT checklist. The time to first fixation (TTFF), the time it took respondents to look at specific website elements, was measured using the eye tracker and iMotions software. All websites were checked with the WAVE tool to detect irregularities and non-compliance with universal design standards. The analysis clearly indicated that websites following the universal design guidelines were more usable, intuitive and accessible for users. It might be concluded that interfaces prepared in accordance with the principles of universal design allow users to find the necessary information and perform the desired actions in a shorter time.
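The TTFF measure used above has a simple operational definition: given a stream of timestamped gaze points, it is the timestamp at which the gaze first enters the area of interest (AOI). The sketch below illustrates that computation; the rectangular AOI, the gaze samples and the millisecond timestamps are invented for the example, and real eye-tracking software additionally filters raw samples into fixations first.

```python
def time_to_first_fixation(samples, aoi):
    """Return the timestamp of the first gaze point inside the AOI.

    samples: list of (t_ms, x, y) gaze points in chronological order.
    aoi: (x_min, y_min, x_max, y_max) rectangle in screen coordinates.
    """
    x0, y0, x1, y1 = aoi
    for t, x, y in samples:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return t
    return None  # the AOI was never looked at

gaze = [(0, 50, 50), (120, 300, 90), (240, 410, 210), (360, 420, 230)]
aoi = (400, 200, 500, 300)   # e.g. a hypothetical "submit" button
print(time_to_first_fixation(gaze, aoi))  # -> 240
```

Comparing such TTFF values for the same AOI across the universal-design and non-compliant variants of a page is what grounds the study's claim that compliant interfaces are found faster.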