Results found: 7

Search results
Searched for keyword: YOLOv5
1
Detecting the usage of a mobile phone during an online test using AI technology
One of the risks of electronic exams is cheating with a mobile phone. In this research, the YOLOv5 machine learning algorithm was used to detect mobile phone use during an electronic test. A custom dataset of phone images in different poses and orientations was created and then annotated using the Makesense website. The webcam on the examinee's computer captures real-time video of the examinee, which is then analyzed with the YOLOv5 algorithm. The maximum accuracy of real-time mobile phone usage detection was 92%, with a false acceptance rate (FAR) of 4%. For comparison and verification, a Jetson Nano was used to detect phone usage on the same dataset. Detection accuracy on the Jetson reached 85% with a 5% FAR. Both results are good and promising.
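The reported accuracy and FAR follow directly from frame-level detection counts. A minimal sketch; the counts below are invented for illustration (chosen to mirror the reported 92% / 4%) and are not from the paper:

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all frames classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def false_acceptance_rate(fp, tn):
    """FAR: fraction of no-phone frames wrongly flagged as phone use."""
    return fp / (fp + tn)

# Hypothetical frame counts: true/false positives and negatives.
tp, tn, fp, fn = 44, 48, 2, 6
print(f"accuracy = {accuracy(tp, tn, fp, fn):.0%}")        # 92%
print(f"FAR      = {false_acceptance_rate(fp, tn):.0%}")   # 4%
```

Note that accuracy is computed over all frames, while FAR is computed only over the frames where no phone was actually in use.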
2
Two-dimensional human pose estimation has been widely applied in real-world applications such as sports analysis, medical fall detection, and human-robot interaction, with many positive results obtained using Convolutional Neural Networks (CNNs). Li et al., at CVPR 2020, proposed a study that achieved high accuracy in 2D keypoint / 2D human pose estimation. However, that study performed estimation only on cropped human images. In this research, we propose a method for automatically detecting and estimating human poses in photos using a combination of YOLOv5 + CC (Contextual Constraints) and HRNet. Our approach inherits the speed of YOLOv5 for detecting humans and the efficiency of HRNet for estimating 2D keypoints / 2D human poses in the images. We also annotated humans in the images with the bounding boxes of the Human 3.6M dataset (Protocol #1) for human detection evaluation. Our approach achieves high detection accuracy, with a processing speed of 55 FPS on the Human 3.6M dataset (Protocol #1). The mean error distance is 5.14 pixels at the full image size (1000×1002). In particular, the average 2D human pose / keypoint estimation results are 94.8% PCK and 99.2% PDJ@0.4 (head joint). The results are available.
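The PCK metric reported here counts a predicted joint as correct when it falls within some fraction of a reference length of the ground truth; the exact normalization (head segment, torso, etc.) varies by convention and is not specified in the abstract. A minimal sketch with hypothetical keypoints:

```python
import math

def pck(pred, gt, ref_len, alpha=0.5):
    """Percentage of Correct Keypoints: a predicted joint is correct
    if it lies within alpha * ref_len of its ground-truth location.
    pred, gt: lists of (x, y) pixel coordinates; ref_len: normalizing
    length in pixels (e.g. head-segment or torso length)."""
    hits = sum(
        math.dist(p, g) <= alpha * ref_len
        for p, g in zip(pred, gt)
    )
    return hits / len(gt)

# Toy example: three joints, a 100-pixel reference length.
gt   = [(10, 10), (50, 50), (90, 90)]
pred = [(12, 11), (120, 50), (91, 89)]
print(pck(pred, gt, ref_len=100, alpha=0.5))  # 2 of 3 within 50 px
```

The second predicted joint lies 70 px from its ground truth, beyond the 50 px threshold, so the score is 2/3.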
3
This article explores techniques for the detection and classification of fish as an integral part of underwater environmental monitoring systems. Employing an innovative approach, the study focuses on developing real-time methods for high-precision fish detection and classification. The implementation of cutting-edge technologies, such as YOLO (You Only Look Once) v5, forms the basis for an efficient and responsive system. The study also evaluates various deep learning approaches to compare the performance and accuracy of fish detection and classification. The results of this research are expected to contribute to more advanced and effective aquatic monitoring systems that support the understanding of underwater ecosystems and conservation efforts.
4
The development of surveillance-video vehicle detection technology in modern intelligent transportation systems is closely related to the operation and safety of highways and urban road systems. Yet current object detection network structures are complex, requiring large numbers of parameters and calculations, so this paper proposes a lightweight network based on YOLOv5. It can be easily deployed on video surveillance equipment even with limited performance, while ensuring real-time and accurate vehicle detection. A modified MobileNetV2 is used as the backbone feature extraction network of YOLOv5, and depthwise separable convolution (DSC) replaces the standard convolution in the bottleneck layer structure. The lightweight YOLOv5 is evaluated on the UA-DETRAC and BDD100k datasets. Experimental results show that this method reduces the number of parameters by 95% compared with the original YOLOv5s and achieves a good tradeoff between precision and speed.
5
The article presents research on animal detection in thermal images using the YOLOv5 architecture. The goal of the study was to obtain a model with high performance in detecting animals in this type of image, and to examine how changes in hyperparameters affect learning curves and final results. This involved testing different values of the learning rate, momentum, and optimizer type in relation to the model's learning performance. Two methods of tuning hyperparameters were used in the study: grid search and evolutionary algorithms. The model was trained and tested on an in-house dataset containing images of deer and wild boars. After the experiments, the trained architecture achieved a best Mean Average Precision (mAP) of 83%. These results are promising and indicate that the YOLO model can be used for automatic animal detection in applications such as wildlife monitoring, environmental protection, and security systems.
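Grid search over learning rate and momentum can be sketched as exhaustive evaluation of a Cartesian product of candidate values. Here the objective is a stand-in function (in the actual study each evaluation would be a full YOLOv5 training run scored by validation mAP); the grid values are hypothetical:

```python
import itertools

# Stand-in for "train YOLOv5 with these hyperparameters, return val mAP".
# Peaks at lr=0.01, momentum=0.937 purely for illustration.
def validation_map(lr, momentum):
    return 0.83 - abs(lr - 0.01) * 10 - abs(momentum - 0.937)

grid = {
    "lr":       [0.001, 0.01, 0.1],
    "momentum": [0.8, 0.9, 0.937],
}

# Evaluate every combination and keep the best-scoring one.
best = max(
    itertools.product(grid["lr"], grid["momentum"]),
    key=lambda cfg: validation_map(*cfg),
)
print("best config:", best)  # (0.01, 0.937)
```

The evolutionary alternative mentioned in the abstract would instead mutate and recombine promising configurations rather than enumerating the full grid, which scales better as the number of hyperparameters grows.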
6
Smart vehicle height detection for limited height roads
Traffic congestion has become more prevalent in metropolitan areas, necessitating the reorganization of roads and their management through computer vision technologies. One such technique is to determine the height of vehicles allowed to use a road and to identify vehicle license plates; on this basis, an efficient traffic monitoring system has been proposed. The proposed system detects objects (vehicles) and uses the laws of area to calculate vehicle heights, and also performs license plate detection using the YOLOv4 and YOLOv5 networks.
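The abstract does not spell out the geometry behind the height calculation; one common proportionality-based sketch uses a reference object of known height at a comparable distance from the camera, so real height scales linearly with bounding-box pixel height. All numbers below are hypothetical:

```python
def estimate_height(box_px_height, ref_px_height, ref_real_height_m):
    """Estimate real-world vehicle height from its bounding-box height,
    using a reference object of known real height at a comparable
    distance (pinhole-camera proportionality)."""
    return ref_real_height_m * box_px_height / ref_px_height

# Hypothetical values: a 2.5 m height-limit gantry spans 200 px in the
# frame; a detected truck's bounding box spans 260 px.
LIMIT_M = 2.5
h = estimate_height(260, 200, LIMIT_M)
print(f"estimated height: {h:.2f} m")  # 3.25 m
if h > LIMIT_M:
    print("vehicle exceeds the height limit")
```

In practice the detector (YOLOv4/YOLOv5) supplies the bounding boxes, and calibration against a known reference in the scene supplies the pixel-to-meter scale.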
7
The work aims to develop an algorithm for identifying objects in a forging plant under production conditions. Particular emphasis is placed on accurate detection and tracking of forgings transferred along the forging line and, where possible, on detecting the employees who control and support the operation of the forging machines, all with the use of standard vision systems. Such an algorithm will enable effective detections that support control of the movement of forged elements, workplace safety analysis, and monitoring of employees' compliance with Occupational Health and Safety Regulations; it also allows additional optimization algorithms to be introduced that further enrich the presented model, which may prove to be a long-term goal forming the basis for subsequent work. Three algorithmic solutions with different levels of complexity were considered during the research. The first two are based on artificial neural networks, while the third uses classical image processing algorithms. In the former cases, the training and validation datasets were generated from recordings taken by standard cameras located in the forging plant. Data were acquired from three cameras: two were used to create the training and validation sets, and the third was used to verify how the developed algorithms perform in a variable environment previously unknown to the models. The impact of model parameters on the results is presented at this stage of the research. It has been shown that machine learning-based solutions cope very well with object detection problems and achieve high accuracy after careful selection of hyperparameters. The algorithms detect objects with excellent accuracy: 92.5% for YOLOv5 and 94.3% for Mask R-CNN. However, a competitive solution using only image transformations, without machine learning, showed that satisfactory results can also be obtained with simpler approaches.
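The abstract does not detail the classical pipeline; one standard building block for such ML-free detection is thresholding the frame to a binary mask and extracting connected components (blobs), each candidate blob then being treated as a detected object. A minimal sketch on a toy mask (a thresholded camera frame would play the role of `mask`):

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected foreground blobs in a binary mask
    (list of lists of 0/1); returns one set of pixel coords per blob."""
    h, w = len(mask), len(mask[0])
    seen, blobs = set(), []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and (y, x) not in seen:
                # Breadth-first flood fill from this unvisited pixel.
                queue, blob = deque([(y, x)]), set()
                seen.add((y, x))
                while queue:
                    cy, cx = queue.popleft()
                    blob.add((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs

# Two separate bright regions -> two candidate objects.
frame = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
print(len(connected_components(frame)))  # 2
```

A real pipeline would add background subtraction before thresholding and filter blobs by area or shape to reject glare and scale, which is where such approaches typically lose accuracy relative to the learned detectors.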