Search results
Searched for keyword: przetwarzanie obrazu (image processing)
Results found: 290
EN
The development of modern surveying methods, particularly Terrestrial Laser Scanning (TLS), has found wide application in protecting and monitoring engineering objects and sites of cultural heritage. For this reason, it is crucial that several factors affecting the correctness of point cloud registration are considered, including the correctness of the distribution of control points (both signalised and natural), the quality of the process, and robustness analysis. The aim of this article is to evaluate the quality and correctness of TLS registration based on point clouds converted to raster form (in spherical mapping) and hand-crafted detectors. The expanded Structure-from-Motion (SfM) approach was used to detect the tie points for TLS registration and reliability assessment. The results demonstrated that affine detectors are useful in detecting a high number of key points (increased for point detectors by 8-12 times and for blob detectors by about 10-24 times), improving the quality and completeness of TLS registration. For the registration accuracy of the point clouds on signalised check points, lower maximum RMSE values can be noted for affine blob detectors than for the standard detectors, and larger values for corner detectors and affine corner detectors (not more than 4 mm in extreme cases, typically 2 mm). The commonly applied target-based registration method yields similar results (differences do not exceed, in extreme cases, 3.5 mm, and are typically less than 2 mm), proving that using affine detectors in the TLS registration process is reasonable and can be recommended.
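As a loose illustration of the detector-based tie-point search described above, the sketch below detects keypoints on a scan exported to a spherical raster image with OpenCV; the file name and the choice of SIFT as the blob detector are assumptions for illustration, not the authors' exact pipeline.

import cv2

# Detect keypoints on a TLS scan converted to a spherical raster image (hypothetical file).
img = cv2.imread("scan_station1_spherical.png", cv2.IMREAD_GRAYSCALE)
detector = cv2.SIFT_create()                      # a blob-type detector; others can be swapped in
keypoints, descriptors = detector.detectAndCompute(img, None)
print(f"{len(keypoints)} keypoints detected")
# Tie points between two stations would then be obtained by descriptor matching, e.g.
# cv2.BFMatcher(cv2.NORM_L2).knnMatch(desc1, desc2, k=2), followed by a ratio test.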
EN
The article explores how qualitative image analysis impacts the process of image interpretation, particularly in composite microstructure analysis. It highlights the importance of high-quality images for accurate computer-based object detection, emphasizing the limitations of rigid pixel-based rules compared to human visual perception. The study underscores the need for optimal imaging conditions to avoid image defects that hinder precise computational analyses in scientific and industrial applications.
EN
Smart farming has become a cutting-edge technology for addressing contemporary issues related to agricultural sustainability. Machine learning (ML) is the engine that powers this evolving technology. The study aims to develop a smart prototype robot that diagnoses citrus trees (healthy or infected) using a convolutional neural network (CNN) algorithm. The classification accuracy was 96%. The robot then sprays the affected areas with pesticide, so that farmers across the country can use it to protect themselves from the hazards of pesticide exposure. The results were good and promising.
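As a rough sketch of the kind of CNN classifier the abstract refers to, the snippet below defines a small binary network for healthy vs. infected leaf images in Keras; the architecture, input resolution and data pipeline are illustrative assumptions, not the authors' model.

import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal binary CNN: healthy vs. infected citrus leaf images (sizes are assumptions).
model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # healthy (0) vs. infected (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets supplied by the user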
4
Analysis recognition of Ghost Pepper and Cili-Padi using Mask RCNN and YOLO
EN
Fruit harvesting robots have made headlines in the agricultural industry in recent years. A fruit recognition system would assist farmers or agricultural industry practitioners in lessening workloads while increasing crop yields. Due to the similar characteristics of chili fruits, grading the chilies and identifying their maturity is difficult. Furthermore, because of their different appearances and sizes, distinguishing between the fruits and the leaves is also difficult. Therefore, the real-time object detection algorithms You Only Look Once (YOLO) and Mask-RCNN are investigated in order to distinguish the fruit from its plant based on shape and colour. YOLO version 5 (YOLOv5) is used to define and distinguish the chili fruits and their leaves based on two characteristics: shape and colour. The CSPDarknet network serves as the backbone in YOLOv5, where feature extraction and mosaic augmentation are used to combine multiple images into a single image. A total of 391 images was divided into two subsets, training and testing, with an 80:20 ratio. YOLOv5 is notable for its ability to detect small objects with high precision in a short amount of time, while Mask-RCNN has proven its ability to recognise chili fruits with precision above 90%. The classification is evaluated using precision, recall, loss function, and inference time.
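For readers who want to try the YOLOv5 part of such a pipeline, a minimal inference sketch using the public ultralytics/yolov5 hub model is given below; the custom weights file and image path are hypothetical placeholders, and the study's actual training setup is not reproduced here.

import torch

# Load YOLOv5 with custom-trained weights (file name is a placeholder) and run detection.
model = torch.hub.load("ultralytics/yolov5", "custom", path="chili_best.pt")
results = model("chili_plant.jpg")        # single-image inference
results.print()                           # class counts and confidences
detections = results.pandas().xyxy[0]     # bounding boxes as a pandas DataFrame
print(detections[["name", "confidence"]])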
EN
The article aims to study the multi-level segmentation of images of objects with arbitrary configuration and placement, based on features of spatial connectivity. Existing image processing algorithms are analyzed, and their advantages and disadvantages are determined. A method of organizing the segmentation of multi-gradation halftone images is developed, and an algorithm implementing the described method is given.
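A minimal illustration of grouping pixels by spatial connectivity, which the described segmentation approach builds on, is sketched below; the grey-level thresholds and input file are assumptions, and the snippet does not reproduce the article's full multi-level procedure.

import cv2
import numpy as np
from scipy import ndimage

img = cv2.imread("halftone.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input image
structure = np.ones((3, 3), dtype=int)                     # 8-connectivity
for level in (64, 128, 192):                               # example grey-level thresholds
    mask = img >= level
    labels, n = ndimage.label(mask, structure=structure)   # spatially connected regions
    print(f"level {level}: {n} connected regions")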
EN
Bone fractures break bone continuity. Impact or stress causes numerous bone fractures. Fracture misdiagnosis is the most frequent mistake in emergency rooms, resulting in treatment delays and permanent impairment. According to Indian population studies, fractures are becoming more common: in the last three decades, there has been a growth of 480,000 cases, and by 2022 the number will surpass 600,000. Classifying X-rays may be challenging, particularly in an emergency room where one must act quickly. Deep learning techniques have recently become more popular for image categorization. Deep neural networks (DNNs) can classify images and solve challenging problems. This research aims to build and evaluate a deep learning system for fracture identification and bone fracture classification (BFC). This work proposes an image-processing system that can identify bone fractures using X-rays. Images from the dataset are pre-processed, enhanced, and their features extracted. Then, the DNN classifiers ResNeXt101, InceptionResNetV2, Xception, and NASNetLarge separate the images into those with unfractured and fractured bones (normal, oblique, spiral, comminuted, impacted, transverse, and greenstick). The most accurate model is InceptionResNetV2, with an accuracy of 94.58%.
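A hedged sketch of a transfer-learning classifier built on the InceptionResNetV2 backbone named in the abstract is shown below; the classification head, input size and training settings are assumptions rather than the authors' exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained backbone with a small classification head (settings are assumptions).
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False                           # freeze the pretrained backbone

num_classes = 7    # normal, oblique, spiral, comminuted, impacted, transverse, greenstick
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)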
EN
Finger tapping is one of the standard tests for Parkinson's disease diagnosis, performed to assess the motor function of patients' upper limbs. In clinical practice, the assessment of the patient's ability to perform the test is carried out visually and largely depends on the experience of clinicians. This article presents the results of research devoted to the objectification of this test. The methodology was based on the proposed measurement method, consisting of frame-by-frame processing of the video stream recorded during the test to determine the time series representing the distance between the index finger and the thumb. The resulting signals were analysed to determine characteristic features, which were then used to distinguish patients with Parkinson's disease from healthy subjects using machine learning methods. The research was conducted with the participation of 21 patients with Parkinson's disease and 21 healthy subjects. The results indicate that it is possible to obtain a sensitivity and specificity of the proposed method at the level of approximately 80%. However, the patients were in the so-called ON phase, when symptoms are reduced due to medication, which was a much greater challenge compared to analyzing signals with clearly visible symptoms as reported in related works.
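The abstract does not state which hand-tracking tool was used; as one readily available way to obtain the thumb-index distance time series from a video, the sketch below uses MediaPipe Hands (landmark 4 is the thumb tip, landmark 8 the index fingertip). The file name and single-hand setting are assumptions.

import cv2
import numpy as np
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture("tapping_test.mp4")         # hypothetical recording
distances = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    res = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if res.multi_hand_landmarks:
        lm = res.multi_hand_landmarks[0].landmark
        thumb, index = lm[4], lm[8]                # thumb tip and index fingertip
        distances.append(np.hypot(thumb.x - index.x, thumb.y - index.y))
cap.release()
# 'distances' is the thumb-index time series from which tapping features are extracted.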
EN
In this paper, climate and environmental datasets were processed with Generic Mapping Tools (GMT) and R scripts to evaluate changes in climate parameters, vegetation patterns and land cover types in Burkina Faso. Located in the southern Sahel zone, Burkina Faso experiences some of the most extreme climatic hazards in sub-Saharan Africa, ranging from extreme floods in the Volta River Basin to desertification and recurrent droughts. The data include the TerraClimate dataset and Landsat 8-9 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) C2 L1 satellite images. The dynamics of the target climate characteristics of Burkina Faso were visualised for 2013-2022 using remote sensing data. To evaluate the environmental dynamics, the TerraClimate data were used for visualizing key climate parameters: extreme temperatures, precipitation, soil moisture, downward surface shortwave radiation, vapour pressure deficit and anomaly. The Palmer Drought Severity Index (PDSI) was modelled over the study area to estimate the soil water balance related to the soil moisture conditions as a prerequisite for vegetation growth. The land cover types were mapped using k-means clustering in R. Two vegetation indices were computed to evaluate the changes in vegetation patterns over the recent decade: the Normalized Difference Vegetation Index (NDVI) and the Soil-Adjusted Vegetation Index (SAVI). The scripts used for the cartographic workflow are presented and discussed. This study contributes to the environmental mapping of Burkina Faso with the aim of highlighting the links between climate processes and vegetation dynamics in West Africa.
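The two vegetation indices named in the abstract can be computed as in the sketch below from Landsat 8/9 OLI red (band 4) and near-infrared (band 5) reflectance rasters; the file names and the canopy adjustment factor L = 0.5 are illustrative assumptions.

import numpy as np
import rasterio

# Read red and NIR reflectance rasters (file names are placeholders).
with rasterio.open("LC09_B4_red.tif") as r, rasterio.open("LC09_B5_nir.tif") as n:
    red = r.read(1).astype(np.float64)
    nir = n.read(1).astype(np.float64)

ndvi = (nir - red) / (nir + red + 1e-9)            # Normalized Difference Vegetation Index
L = 0.5                                            # canopy background adjustment (assumed)
savi = (1 + L) * (nir - red) / (nir + red + L)     # Soil-Adjusted Vegetation Index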
9
Parameters evaluation of cameras in embedded systems
EN
The article presents a comparison of micro cameras for video data acquisition. The tested cameras can be used in conjunction with embedded systems, in particular in the system for detecting mechanical damage of airport lamps. The work verified the compatibility of operation with microcomputers: Raspberry Pi 4B, Google Coral, NVIDIA Jetson Nano and NVIDIA Jetson Xavier AGX and cameras: Raspberry Pi Camera HD v2, Waveshare 16579, IMX477 and Logitech C922. Tests were performed under laboratory conditions based on an ISO 12233 standard test chart.
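The abstract does not detail which metrics were derived from the ISO 12233 chart; as a loose illustration of comparing cameras on the same framed test scene, the sketch below computes a simple variance-of-Laplacian sharpness proxy per capture. This is not the ISO 12233 SFR/MTF procedure, and the file names are hypothetical.

import cv2

def sharpness(path):
    # Variance of the Laplacian: a simple, widely used sharpness proxy.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(img, cv2.CV_64F).var()

for cam in ("rpi_cam_hd_v2.png", "waveshare_16579.png", "imx477.png", "logitech_c922.png"):
    print(cam, round(sharpness(cam), 1))           # file names are hypothetical captures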
10
Możliwości przetwarzania sekwencji wizyjnych w systemach wbudowanych (Possibilities of video sequence processing in embedded systems)
EN
The article presents the results of experimental research on the video segmentation process using two different embedded systems. The performance of solutions based on the Raspberry Pi 4B microcomputer and the Nvidia Jetson Nano platform was tested for the possibility of their implementation in a measurement platform for automatic testing of the quality of airport lamps. The processing speed for different image resolutions and the module power requirements were compared.
11
EN
The article presents various methods of obtaining active layers in organic solar cells and methods of detecting their structural defects. Optical methods are presented, where defects are determined in stages with different accuracy, using image processing in the visible and thermal range.
12
EN
The article presents an overview of thresholding algorithms. It compares the algorithms proposed by Pun, Kittler, Niblack, Huang, Rosenfeld, Remesh, Lloyd, Riddler, Otsu, Yanni, Kapur and Jawahar. Additionally, it was tested how tuning the Pun, Jawahar and Niblack methods affects thresholding efficiency, and a combination of the Pun algorithm with an a priori algorithm was proposed. All presented algorithms were implemented and tested for effectiveness in detecting overhead lines.
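As a minimal example of one of the surveyed methods, the sketch below applies Otsu's global threshold to an overhead-line image with OpenCV; the input file is a placeholder, and the other listed algorithms would be substituted analogously (e.g. a local method such as Niblack is available as skimage.filters.threshold_niblack).

import cv2

img = cv2.imread("overhead_line.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
# Otsu's method picks the global threshold that minimises intra-class variance.
thresh_value, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", thresh_value)
cv2.imwrite("overhead_line_binary.png", binary)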
13
Automatics detect and Shooter Robot based on object detection using camera
EN
Colour and shape detection has been widely developed. Target detection and follower robot systems have been developed by previous researchers, but in the military domain few have developed automatic shooting robots. Therefore, the researchers created a robot that can detect and shoot targets automatically, in which a camera is used to detect the target while the navigation system uses the PID method. The robot works when it receives a command from the user to search for a predetermined target: the camera captures the scene, and the image is processed on a mini PC to find a match. After that, the robot adjusts its position relative to the target; having moved closer to the target, the robot stops and the system shoots the target automatically. From the results obtained, the PID settings most suitable for the system are Kp 10, Ki 0.9, and Kd 0.5. The overall system test achieved a success rate of 83.3%, with the fastest time to find and shoot a target being 17 seconds. It is hoped that this research can help the military field in implementing an automatic target shooter system.
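A minimal discrete PID loop using the gains reported in the abstract (Kp = 10, Ki = 0.9, Kd = 0.5) is sketched below; the error signal (horizontal pixel offset of the detected target) and the loop period are assumptions for illustration, not the robot's actual firmware.

# Minimal discrete PID controller; gains are taken from the abstract, dt is assumed.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=10.0, ki=0.9, kd=0.5, dt=0.05)    # 20 Hz control loop (assumed)
# steering = controller.update(target_x - frame_center_x)  # pixel offset as the error term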
EN
Increasingly, evidence of a road accident includes video recordings that provide useful material in the work of an expert witness. The analysis of the recorded course of the event is the most reliable basis for accident reconstruction, including the speed and actions taken by the participants. Effective use of a video recording for road accident analysis usually requires at least its initial processing, e.g. cropping and removing unnecessary parts. In order to improve the quality of the source video material in terms of mapping the geometric features of the objects recorded on the film, it is necessary to correct the distortion, followed by orthorectification. It is also often necessary to improve the image by applying filters to the video recording. These operations are possible with the use of Photorect 2.0, which is discussed in the article. The transformed video, post-processed by means of perspective effect elimination, cropping and frame extraction, was used to prepare a computer simulation of the event, synchronized with this film.
EN
Among many important functions, bees play a key role in food production. Unfortunately, worldwide bee populations have been decreasing since 2007. One reason for the decrease of adult worker bees is varroosis, a parasitic disease caused by the Varroa destructor (V. destructor) mite. Varroosis can be quickly eliminated from beehives once detected. However, this requires them to be monitored continuously during periods of bee activity to ensure that V. destructor mites are detected before they spread and infest the entire beehive. To this end, the use of Internet of things (IoT) devices can significantly increase detection speed. Comprehensive solutions are required that can cover entire apiaries and prevent the disease from spreading between hives and apiaries. In this paper, we present a solution for global monitoring of apiaries and the detection of V. destructor mites in beehives. Our solution captures and processes video streams from camera-based IoT devices, analyzes those streams using edge computing, and constructs a global collection of cases within the cloud. We have designed an IoT device that monitors bees and detects V. destructor infestation via video stream analysis on a GPU-accelerated Nvidia Jetson Nano. Experimental results show that the detection process can be run in real time while maintaining similar efficacy to alternative approaches.
EN
In this paper, we show the capabilities and limitations of Alsat-2 images in mapping urban areas in emergency situations. The aim of the research was to provide geo-referenced urban information in real time during natural disasters (floods, earthquakes). This is important for fast decision-making, as such information provides necessary support for the estimation of damages. The following study tests the spatial and radiometric quality of Alsat 2-A images and proposes technical solutions for their use in urban mapping. In order to identify and extract the ground realities, we describe and make an effort to discern the perceptible aspects of features in urban areas. The adopted methodology carries out a statistical analysis of the information extracted from Alsat-2 images of the studied area (the city of M'Sila, Algeria) using classification and segmentation methods. The statistics show the percentage of each area in relation to the total geometric surface and the distances for linear objects. As a result, the quality of the extracted urban texture necessary for urban mapping is determined. Image processing to improve resolution quality was carried out using a merging method. The analysis of the consistency and discrepancy of these statistics is performed by comparing samples of field data using a confusion matrix.
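The accuracy assessment step mentioned at the end of the abstract can be illustrated as below, where classified pixels are compared against field reference samples with a confusion matrix; the label arrays are hypothetical placeholders.

import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

reference = np.array([0, 0, 1, 2, 1, 2, 0, 1])     # field-survey labels (hypothetical)
classified = np.array([0, 1, 1, 2, 1, 2, 0, 0])    # labels extracted from the Alsat-2 image
print(confusion_matrix(reference, classified))      # per-class agreement and confusion
print("overall accuracy:", accuracy_score(reference, classified))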
17
Image analysis framework for hydraulic mixing
EN
This study is focused on the image analysis of a motionless hydraulic mixing process, for which pressure changes were the driving force. To improve the understanding of hydraulic mixing, mixing efficiency was assessed with dye introduction, which resulted in certain challenges. In order to overcome them, a framework and methodology consisting of three main steps were proposed and applied to an experimental case study. The experiments were recorded using a camera and then processed according to the proposed framework and methodology. The main outputs of the methodology, which were based only on the recorded video, were the liquid heights and colour changes during the process time. In addition, considerable attention has also been given to issues related to other colour systems and the hydrodynamic description of the process.
EN
Discrete two-dimensional orthogonal wavelet transforms find applications in many areas of analysis and processing of digital images. In a typical scenario the separability of two-dimensional wavelet transforms is assumed and all calculations follow the row-column approach using one-dimensional transforms. For the calculation of one-dimensional transforms the lattice structures, which can be characterized by high computational efficiency and non-redundant parametrization, are often used. In this paper we show that the row-column approach can be excessive in the number of multiplications and rotations. Moreover, we propose the novel approach based on natively two-dimensional base operators which allows for significant reduction in the number of elementary operations, i.e., more than twofold reduction in the number of multiplications and fourfold reduction of rotations. The additional computational costs that arise instead include an increase in the number of additions, and introduction of bit-shift operations. It should be noted, that such operations are significantly less demanding in hardware realizations than multiplications and rotations. The performed experimental analysis proves the practical effectiveness of the proposed approach.
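For reference, the conventional row-column (separable) transform that the proposed natively two-dimensional operators are compared against can be computed with PyWavelets, which implements exactly this separable scheme; the wavelet choice and input are illustrative assumptions.

import numpy as np
import pywt

img = np.random.rand(256, 256)                    # stand-in for an image block
# Separable 2-D DWT: rows then columns, yielding approximation and detail subbands.
cA, (cH, cV, cD) = pywt.dwt2(img, "db2")
print(cA.shape, cH.shape, cV.shape, cD.shape)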
EN
The research investigates the possibility of applying Sentinel-2 and PlanetScope satellite imagery and LiDAR data for the automation of land cover mapping and 3D vegetation characterisation in post-agricultural areas, mainly in the aspect of detection and monitoring of secondary forest succession. The study was performed for a test area in the Biskupice district (southern Poland), as an example of an uncontrolled forest succession process occurring on post-agricultural lands. The areas of interest were parcels where agricultural use has been abandoned and forest succession has progressed. This paper indicates the possibility of automating the process of monitoring wooded and shrubby areas developing in post-agricultural areas with the help of modern geodata and geoinformation methods. It was verified whether the processing of Sentinel-2 and PlanetScope imagery allows for reliable land cover classification for identifying forest succession areas. The airborne laser scanning (ALS) data were used for deriving detailed information about the forest succession process. Using the ALS point clouds, vegetation parameters, i.e. height and canopy cover, were determined and presented as raster maps, histograms, or profiles. In the presented study, the processing of Sentinel-2 and PlanetScope imagery and ALS data showed a significant differentiation of the spatial structure of vegetation. These differences are visible in the surface size (2D) and the vertical vegetation structure (3D).
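A hedged sketch of the unsupervised land-cover clustering step is given below in Python (for consistency with the other examples here, although such clustering is often done in R); the band stack file and the number of clusters are assumptions.

import numpy as np
import rasterio
from sklearn.cluster import KMeans

with rasterio.open("sentinel2_stack.tif") as src:      # hypothetical multiband raster
    bands = src.read()                                 # shape: (n_bands, rows, cols)

n_bands, rows, cols = bands.shape
pixels = bands.reshape(n_bands, -1).T                  # one row per pixel
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
land_cover = labels.reshape(rows, cols)                # cluster map, e.g. succession classes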
20
Tracking the transport of pollutants by means of imaging methods
EN
A method for the identification and tracking of pollutant plumes in water is presented and applied to laboratory data. The method uses an intensity threshold and associated image processing algorithms to identify the pollutant's plume within the footage. Quantitative geometrical parameters are then extracted on each frame as proxies of the turbulent diffusion (i.e. area and perimeter) and advection (i.e. centroid location). From the determined plume location in each frame, it is then possible to devise a tracking algorithm which can determine the trajectory and eventual fate of the plume. The developed method is applied to two different types of plumes, one generated by a liquid pollutant (rhodamine) and another by a granular matrix-type material (coal), to compare its capability of tracking different plumes. Although developed with laboratory images, the presented method is general and can be applied to field images as well. The advantages and limitations of the proposed methodology are also discussed.
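A minimal sketch of the per-frame measurements described above is given below: each frame is thresholded, the largest connected region is taken as the plume, and its area, perimeter and centroid are recorded; the video file and threshold value are placeholders rather than the authors' calibrated settings.

import cv2

cap = cv2.VideoCapture("plume_experiment.avi")      # hypothetical laboratory footage
track = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)   # intensity threshold (assumed)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        continue
    plume = max(contours, key=cv2.contourArea)       # largest region taken as the plume
    m = cv2.moments(plume)
    if m["m00"] == 0:
        continue
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]            # centroid (advection proxy)
    track.append((cv2.contourArea(plume), cv2.arcLength(plume, True), cx, cy))
cap.release()
# 'track' holds per-frame area, perimeter and centroid, from which the trajectory is derived.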