Although object detection has achieved great success in computer vision over the past few years, small-object detection performance remains far from ideal. For instance, object detection in UAV aerial photography plays an important role in traffic monitoring and other fields, but it faces great challenges. Objects in aerial images are mainly small objects, which have low resolution and weak feature representations. Information is lost in high-dimensional feature maps, and this information is crucial for classifying and localizing small objects. The most common way to improve small-object detection accuracy is to use high-resolution images, but this incurs additional computational cost. To address these problems, this article proposes a new model, SINextNet, which uses a new dilated convolution module, the SINext block. This module is based on depthwise-separable convolution and enlarges the model's receptive field. While extracting small-object features, it combines them with background information, greatly improving the feature representation of small objects. The experimental results indicate that the proposed method can achieve advanced performance across multiple aerial datasets.
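The abstract does not give the SINext block's exact structure, but the two ingredients it names, dilation and depthwise-separable convolution, have well-known effects that a minimal sketch can illustrate: dilation enlarges the receptive field of a conv stack at no parameter cost, and the separable factorization cuts the parameter count. The functions below are illustrative helpers, not code from the paper.

```python
# Illustrative sketch (not the paper's SINext block): how dilation enlarges a
# conv stack's receptive field, and how depthwise-separable convolution
# reduces parameters versus a standard convolution.

def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation d."""
    return d * (k - 1) + 1

def receptive_field(layers):
    """Receptive field of stacked convs; layers = [(kernel, dilation, stride)]."""
    rf, jump = 1, 1
    for k, d, s in layers:
        rf += (effective_kernel(k, d) - 1) * jump
        jump *= s
    return rf

def separable_params(c_in, c_out, k):
    """Depthwise (c_in * k * k) plus pointwise (c_in * c_out) parameters."""
    return c_in * k * k + c_in * c_out

def standard_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution."""
    return c_in * c_out * k * k

# Three plain 3x3 convs vs. the same stack with dilations 1, 2, 4:
plain = receptive_field([(3, 1, 1)] * 3)                        # 7
dilated = receptive_field([(3, 1, 1), (3, 2, 1), (3, 4, 1)])    # 15
```

The dilated stack covers more than twice the context of the plain one with the same parameter count, which is how such a module can mix small-object features with surrounding background information cheaply.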
Self-supervised monocular depth estimation has been widely applied in autonomous driving and automated guided vehicles. It offers the advantages of low cost and extended effective distance compared with alternative methods. However, devices with limited computing resources, such as automated guided vehicles, struggle to run state-of-the-art large model architectures. In recent years, researchers have acknowledged this issue and endeavored to reduce model size. Model lightweighting techniques aim to decrease the number of parameters while maintaining satisfactory performance. In this paper, to enhance the model's performance in lightweight scenarios, a novel approach encompassing three key aspects is proposed: (1) utilizing LeakyReLU to involve more neurons in manifold representation; (2) employing large convolutions for improved recognition of edges in lightweight models; (3) applying channel grouping and shuffling to maximize model efficiency. Experimental results demonstrate that the proposed method achieves satisfactory outcomes on the KITTI and Make3D benchmarks with only 1.6M trainable parameters, a reduction of 27% compared with the previously smallest monocular depth estimation model, Lite-Mono-tiny.
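Point (3) above refers to grouped convolutions, which save parameters by processing channel groups independently but block information flow between groups; a channel shuffle restores cross-group mixing. The abstract does not specify the paper's exact scheme, so the sketch below shows the standard ShuffleNet-style shuffle (reshape to groups, transpose, flatten) on a flat channel list as an assumption-labeled illustration.

```python
# Hedged sketch of ShuffleNet-style channel shuffle (assumed here; the paper's
# exact grouping scheme is not given in the abstract). Channels are modeled as
# a flat list; real implementations permute a tensor's channel dimension.

def channel_shuffle(channels, groups):
    """Interleave channels across groups: reshape (g, c/g) -> transpose -> flatten."""
    per_group = len(channels) // groups
    return [channels[g * per_group + i]
            for i in range(per_group)
            for g in range(groups)]

# With 6 channels in 2 groups [0,1,2] and [3,4,5], the shuffle interleaves them:
channel_shuffle([0, 1, 2, 3, 4, 5], 2)  # -> [0, 3, 1, 4, 2, 5]
```

After the shuffle, the next grouped convolution sees channels drawn from every previous group, so the model keeps the parameter savings of grouping without isolating the groups' features.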