In military training, shooting target reporting systems face several challenges, including difficulty in capturing the target surface, poor real-time performance in judging bullet hole ring values, and susceptibility to external environmental factors. To address these issues, this work proposes an approach that combines deep learning target detection and semantic segmentation algorithms with the traditional YOLOv5 detection model. The model is extended into a multi-task architecture, consisting of a backbone encoder and two decoders, to detect and segment the chest ring target area. To counteract external environmental interference, the work employs scale-invariant feature transform (SIFT) feature point matching and image correction to eliminate tilting and jittering. The TPH-YOLOv5 network framework is enhanced by integrating the Swin Transformer encoding structure, enabling accurate detection of bullet holes and extraction of local and global features from the chest target image. In addition, a center-distance judgment method based on arc adjacency matrix ellipse detection is designed to identify the boundary of the tenth ring of the chest ring target. By measuring the distance from the center of each bullet hole to the center of the target, the system achieves precise judgment of bullet hole scores. The experimental results demonstrate the effectiveness of the proposed system. After 300 training iterations, the accuracy of target frame detection reaches 0.94, with a recall of 0.92. The precision and recall of bullet hole detection reach 0.98 and 0.97, respectively, at a frame rate of 45 fps. Testing and validation in a simulated real-world shooting environment confirm the practical potential of the developed shooting target reporting system.
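The SIFT-based correction step described above can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal example, assuming OpenCV (>= 4.4) and hypothetical image files "reference.jpg" (a front-facing chest ring target) and "frame.jpg" (a tilted capture), showing how SIFT feature point matching and a homography warp can remove tilt before bullet hole detection.

```python
# Illustrative sketch only (not the published code): align a captured target
# frame to a canonical reference view using SIFT matching and a homography.
import cv2
import numpy as np

ref = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)    # canonical target view (assumed file)
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)      # tilted/jittered capture (assumed file)

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_frm, des_frm = sift.detectAndCompute(frame, None)

# Match SIFT descriptors and keep good matches via Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_frm, des_ref, k=2)
        if m.distance < 0.75 * n.distance]

# Estimate a frame-to-reference homography with RANSAC and warp the frame,
# which removes perspective tilt so ring geometry is restored.
src = np.float32([kp_frm[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
corrected = cv2.warpPerspective(frame, H, (ref.shape[1], ref.shape[0]))
```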
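The center-distance scoring idea can likewise be sketched. The function below is an assumption-laden illustration rather than the paper's method: it takes the ellipse fitted to the tenth-ring boundary (center, semi-axes, rotation), normalizes the bullet hole offset by the ellipse axes, and maps the result to a ring value under the assumption that the rings are concentric and equally spaced with a ring width equal to the tenth-ring radius.

```python
# Illustrative sketch only: map the distance from a bullet-hole center to the
# target center onto a ring score, using the fitted tenth-ring ellipse as the unit.
import math

def ring_score(hole_xy, ellipse_center, ellipse_axes, angle_deg, max_ring=10):
    """hole_xy: bullet-hole center (x, y) in the corrected image.
    ellipse_center, ellipse_axes (semi-major a, semi-minor b), angle_deg:
    parameters of the ellipse fitted to the tenth-ring boundary."""
    dx = hole_xy[0] - ellipse_center[0]
    dy = hole_xy[1] - ellipse_center[1]
    # Rotate the offset into the ellipse's axis-aligned frame.
    t = math.radians(angle_deg)
    u = dx * math.cos(t) + dy * math.sin(t)
    v = -dx * math.sin(t) + dy * math.cos(t)
    a, b = ellipse_axes
    # r <= 1 means the hole lies inside the tenth ring; each additional
    # ring width beyond it (assumed equal spacing) lowers the score by one.
    r = math.hypot(u / a, v / b)
    return max_ring if r <= 1.0 else max(0, max_ring - math.ceil(r - 1.0))

# Example: a hole half a ring width outside the tenth-ring boundary scores 9.
print(ring_score((130, 100), (100, 100), (20, 20), 0.0))
```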
Keywords: Target detection, Detection and tracking algorithms, Education and training, Image segmentation, Chest, Feature extraction, Image processing