Results found: 4

Search results
Searched for:
in keywords: YOLO
Vol. 23, pp. 105--111
EN
There is a great range of spectacular coral reefs in the ocean world. Unfortunately, they are in jeopardy due to an overabundance of one specific starfish, the coral-eating crown-of-thorns starfish (COTS). This article presents research aimed at innovation in COTS control, using a deep learning model based on the You Only Look Once version 5 (YOLOv5) algorithm running on an embedded device for COTS detection. It helps professionals optimize their time and resources and enhances the efficiency of coral reef preservation worldwide. The algorithm's performance was outstanding, with a precision of 0.93, a recall of 0.77, and an F1-score of 0.84.
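The three reported metrics are internally consistent: the F1-score is the harmonic mean of precision and recall. A minimal sketch checking this against the values quoted in the abstract:

```python
# Verify that the reported F1-score follows from the reported
# precision (0.93) and recall (0.77), as quoted in the abstract.
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.93, 0.77), 2))  # → 0.84
```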
EN
This paper presents a method to automatically estimate the speed of vehicles, including cars and motorcycles, under mixed traffic conditions from video sequences acquired with stationary cameras in Hanoi, Vietnam. The motion of each vehicle is detected and tracked across the frames of the video sequences using the YOLOv4 and SORT algorithms with a custom dataset. In the method, the distance traveled by the vehicle is the length of the virtual point-detectors, the travel time is calculated from the movement of the vehicle's centroid over the entrance and exit of the virtual point-detectors (i.e., the region of interest), and the speed is then estimated from the traveled distance and the travel time. The results of two experimental studies showed that the proposed method had small MAPE values (within 3%), proving that it is reliable and accurate for application in real-world mixed traffic environments such as Hanoi, Vietnam.
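The core arithmetic of the method is simple: speed is the known length of the virtual detector region divided by the measured travel time, and accuracy is summarized with MAPE. A minimal sketch, with hypothetical numbers (the detector length, travel times, and ground-truth speeds below are illustrative, not from the paper):

```python
# Sketch of the abstract's speed-estimation idea with made-up values.
def estimate_speed_kmh(detector_length_m: float, travel_time_s: float) -> float:
    """Speed = fixed distance between virtual detectors / travel time."""
    return detector_length_m / travel_time_s * 3.6  # m/s -> km/h

def mape(estimates, ground_truth):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs(e - g) / g for e, g in zip(estimates, ground_truth)) / len(estimates)

# Hypothetical example: a 20 m virtual detector crossed in 1.8 s.
speed = estimate_speed_kmh(20.0, 1.8)  # 40 km/h
print(mape([39.0, 41.0], [40.0, 40.0]))  # → 2.5 (percent)
```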
EN
Artificial Intelligence has been touted as the next big thing, capable of altering the current landscape of the technological domain. Through the use of Artificial Intelligence and Machine Learning, pioneering work has been undertaken in the area of visual and object detection. In this paper, we analyze a visual assistant application for guiding visually impaired individuals. With recent breakthroughs in computer vision and supervised learning models, the problem at hand has been reduced significantly, to the point where new models are easier to build and implement than existing ones. Different object detection models now provide object tracking and detection with great accuracy, and these techniques have been widely used to automate detection tasks in various areas. A few recent detection approaches, such as YOLO (You Only Look Once) and SSD (Single Shot Detector), have proved to be consistent and quite accurate at detecting objects in real time. This paper utilizes a combination of these state-of-the-art, real-time object detection techniques to develop a good base model and implements a 'Visual Assistant' for visually impaired people. The results obtained are improved and superior compared to existing algorithms.
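Combining the outputs of two detectors such as YOLO and SSD typically requires deciding when two boxes refer to the same object, which is conventionally done by thresholding their Intersection-over-Union (IoU). The abstract does not describe its fusion rule, so the sketch below only shows the standard IoU computation on hypothetical boxes:

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two hypothetical detections of the same object from the two detectors;
# an IoU above a threshold (commonly 0.5) would mark them as duplicates.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.333...
```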
EN
The paper presents microscopic studies of activated sludge supported by automatic image analysis based on deep learning neural networks. Organisms classified as Arcella vulgaris were chosen for the research. They frequently occur in waters containing organic substances as well as in WWTPs employing the activated sludge method. They can usually be clearly seen and counted with a standard optical microscope thanks to their distinctive appearance, numerous population, and passive behavior; thus, these organisms constitute a viable object for a detection task. The paper compares the performance of two deep learning networks, YOLOv4 and YOLOv8, in the automatic image analysis of the aforementioned organisms. YOLO (You Only Look Once) is a one-stage object detection model that looks at the analyzed image once and allows real-time detection without a marked loss of accuracy. The applied YOLO models were trained on sample microscopic images of activated sludge. The training data set was created by manually labeling the digital images of the organisms, followed by the calculation and comparison of various metrics, including recall, precision, and accuracy. The architecture of the networks built for the detection task was general, meaning that the structure of the layers and filters was not tailored to the purpose of the models. Given this universal construction, the accuracy and quality of the classification can be considered very good. This means that the general architecture of YOLO networks can also be used for specific tasks such as the identification of shelled amoebae in activated sludge.
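Manually labeled boxes for YOLO-family training are commonly stored as one text line per object, with class index and box coordinates normalized to the image size. A minimal sketch of that conversion, with a hypothetical Arcella annotation (the pixel coordinates and image size below are made up):

```python
# Convert a labeled bounding box in pixels to the normalized
# "class x_center y_center width height" line used in YOLO training data.
def to_yolo_label(cls: int, x1: int, y1: int, x2: int, y2: int,
                  img_w: int, img_h: int) -> str:
    xc = (x1 + x2) / 2 / img_w   # box center, normalized to [0, 1]
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w        # box size, normalized to [0, 1]
    h = (y2 - y1) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Hypothetical annotation in a 640x480 microscope image:
print(to_yolo_label(0, 100, 100, 200, 200, 640, 480))
```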