ADAS (Advanced Driver Assistance Systems) play an important role in building a safe, modern traffic system. For these systems, detection accuracy and response speed are critical. However, detecting moving vehicles remains difficult owing to high vehicle density, complex urban backgrounds, and similar factors; the requirement to detect and identify objects in real time is a further challenge for current systems. This paper proposes a deep-learning model to increase accuracy and improve response speed for intelligent driving assistance systems. Specifically, we combine the YOLO (You Only Look Once) model with a dataset collected and labelled specifically for Vietnamese traffic, together with our training algorithm. Experiments were then run on an NVIDIA Jetson TX2 embedded computer. The results show that the proposed method increases speed by at least 1.5× with a detection rate of 79% for the static-camera system, and by at least 1.5× with a detection rate of 89% for the dynamic-camera system, on 1280×720 px images.
Object detection, a key application of machine learning in image processing, has achieved significant success thanks to advances in deep learning [6]. In this paper, we analyse the vulnerability of one of the leading object detection models, YOLOv5x [14], to adversarial attacks using specially designed perturbations known as "adversarial patches" [4]. These perturbations, while often visible, can confuse the model, which can have serious consequences in real-world applications. We present a methodology for generating these patches using various techniques and algorithms, and we analyse their effectiveness under various conditions. In addition, we discuss potential defences against these attacks and emphasise the importance of security research in the context of the growing popularity of ML technology [13]. Our results indicate the need for further research in this area, bearing in mind the evolution of adversarial attacks and their impact on the future of ML technology.
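To illustrate the mechanism behind such patch attacks, the following is a minimal sketch using a toy sigmoid-linear stand-in for the detector's objectness score. All names here (`objectness`, `attack`, the weight matrix `W`) are hypothetical: a real attack would backpropagate through YOLOv5x's detection head rather than this surrogate, but the core loop — gradient steps on the patch pixels, clipped to the valid pixel range, to suppress the detection score — is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a detector: a fixed linear model whose sigmoid output
# plays the role of an objectness score. (Hypothetical surrogate; a real
# attack would differentiate through the YOLOv5x network instead.)
W = rng.normal(size=(8, 8))

def objectness(img):
    """Sigmoid of a linear response over the whole image."""
    return 1.0 / (1.0 + np.exp(-np.sum(W * img)))

def attack(img, region, steps=200, lr=0.5):
    """Optimise an adversarial patch inside `region` to hide the object.

    region = (row_start, row_stop, col_start, col_stop); only those
    pixels are modified, mimicking a physically placeable patch.
    """
    r0, r1, c0, c1 = region
    patch = img[r0:r1, c0:c1].copy()
    for _ in range(steps):
        x = img.copy()
        x[r0:r1, c0:c1] = patch
        s = objectness(x)
        # For sigmoid(sum(W*x)): d(score)/d(pixel) = s * (1 - s) * W
        grad = s * (1.0 - s) * W[r0:r1, c0:c1]
        # Gradient *descent* on the score (evasion), kept in [0, 1]
        patch = np.clip(patch - lr * grad, 0.0, 1.0)
    x = img.copy()
    x[r0:r1, c0:c1] = patch
    return x, patch

img = rng.uniform(size=(8, 8))
adv, patch = attack(img, (2, 6, 2, 6))
print(objectness(img), objectness(adv))  # score drops after the attack
```

Because the surrogate score is monotone in a linear sum over the patch pixels, each clipped descent step can only lower it; against a real detector the same loop is run with automatic differentiation and typically adds smoothness and printability terms to the loss.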