
  1. (School of Information Engineering and Artificial Intelligence, Henan Open University, Zhengzhou Vocational University of Information and Technology, Zhengzhou 450046, China)



Keywords: LabVIEW, epidemics, sudden events, public security images, image recognition

1. Introduction

In recent years, flexible geometric models and attribute assignment methods have been applied to security image recognition. Image recognition technology is now widely used across society, especially in image processing. For example, an image can be analyzed frame by frame according to its grayscale values, color histogram, texture attributes, and/or mixed attributes. An image is also represented as a matrix of elements corresponding to its grayscale values. Based on singular value decomposition and principal component analysis, image features are extracted and classified. Because of the random angles and positions of human bodies, descriptions of the objects in security images differ. In the anti-smuggling department, an image sequence can be analyzed according to image size, the materials present, and image characteristics. With the introduction of new technology and safety control methods, three-dimensional image recognition methods are gradually being developed.

With the continuous development of image recognition and statistical learning theory, artificial intelligence and new scientific methods have been applied, and automatic imaging methods have been proposed. However, in security image recognition, the positions of specific targets are unclear and the boundaries between different targets are fuzzy, which makes automatic detection difficult. Research on security image detection methods is therefore key to future security image processing. It is necessary to construct an analysis model for suspicious content in security images from public places in order to detect suspicious activity and fuse such images. Improving recognition in public-place security images during epidemics and sudden events will raise the level of security monitoring.

The extraction of suspicious features from security images during epidemics and sudden events is based on detection and cluster analysis of the suspicious information. Traditional methods for extracting such features mainly include irregular triangulation, linear enhancement technology, and particle clustering analysis. In [4], a method for extracting suspicious features from public security images during epidemics and other sudden events via Laplace sharpening feature analysis was proposed. Maximum between-cluster variance analysis is used to establish a suspicious feature segmentation model for public security images, and image parameters can be detected and identified from the segmentation results. However, this method has poor adaptability and low detection accuracy when extracting suspicious features from public security images taken during epidemics and sudden events. In [5], a suspicious feature extraction method based on a region-growing algorithm was proposed; fuzzy-information clustering and region-growing analysis are used to extract suspicious features from public security images captured during epidemics and sudden events, but the method's environmental adaptability is not high. The authors in [6] put forward a suspicious feature extraction method for security images based on grayscale parameter analysis combined with activities and shapes, realizing feature extraction through maximum pixel recognition, but the imaging level and degree of recognition of this method are not high.

To solve the above problems, this paper puts forward a LabVIEW-based recognition algorithm for public security images, and builds image recognition and suspicious feature extraction on cross-regional block fusion. A suspicious information detection model is constructed by fusing human appearance feature parameters with dynamic gaits. The paper constructs a matching model of suspicious dynamic information block features from security images, decomposes suspicious background information, and builds an edge contour detection model for security images taken during epidemics and sudden events. According to the suspicious edge detection results from public-place security images, the spatial structure of the images is extracted, and the risk difference characteristics of human body shapes in the images are captured to enhance the detection of suspicious information. Finally, a simulation test shows that the proposed method has superior performance, improving suspicious feature extraction from public-place security images based on epidemics and sudden events.

2. Suspicious Information Detection and Imaging Optimization

2.1 Detection of Suspicious Information

In order to extract suspicious features from security images taken during epidemics and sudden events, it is necessary to construct a scale segmentation model of suspicious information in security images. In this work, an image collection device with a scintillation detector is adopted. When a target is irradiated by X-rays, the detector senses the rays passing through the target. The essence of detection is converting X-rays into electronic signals, which facilitates subsequent image processing. The scintillator converts X-rays into visible light in a short period of time, so the obtained image can quickly be converted into an electronic signal: the X-rays pass through the object and are then received by the detector. The photodiode converts weak optical signals into codes that are convenient for analysis and processing, so as to obtain suspicious information from public-place security images taken during epidemics and sudden events. The label distribution is generated according to three factors packaged into a multi-factor distribution, thus realizing feature embedding and feature extraction from data sets of suspicious images. The spatial structure model is shown in Fig. 1.

Fig. 1. Schematic for security inspection of images from public places.
../../Resources/ieie/IEIESPC.2023.12.4.283/fig1.png

The suspicious information model for security inspection of images from public places during epidemics and sudden events is shown in Fig. 1. The edge contour detection function is obtained by using maximum pixel parameter identification, and the label identification model is expressed as follows:

(1)
$\begin{aligned} v(t+1)&=\omega v(t)+\varphi [p-x(t)]\end{aligned} $
(2)
$\begin{aligned} x(t+1)&=x(t)+v(t+1)\end{aligned} $

where $\omega $ is the time point at which the actual occurrence of a public threat is controlled, $v(t)$ is the degree of influence from the overall time distribution, $p$ is the movement to which actual public security threat Y belongs, $x(t)$ is the public security threat movement to which style I belongs, and $\varphi [p-x(t)]$ is the public security threat parameter capturing the different historical background colors of public security threats. Based on the fuzzy-boundary feature detection method, a multi-level wavelet decomposition structure model of security images taken during epidemics and sudden events is constructed through orthogonal wavelet scale decomposition as follows:

(3)
$ W_{ws}\left(s,\tau \right)=\left| \chi _{ws}\left(s,\tau \right)\right| $

where

(4)
$ \chi _{ws}\left(s,\tau \right)=\sqrt{s}\int _{-\infty }^{+\infty }u\left(t\right)u^{\ast }\left[s\left(t-\tau \right)\right]dt $

where $\chi _{ws}(s,\tau )$ is the loss between the actual shape and the predicted shape, $u(t)$ is the pixel error in the security images from public places taken during epidemics and sudden events, and $u^{\ast }$ is the probability of the $i$-th kind of suspicious category. A matching model of the suspicious dynamic information block feature of security images from public places is constructed by using fusion detection of human appearance feature parameters and dynamic gaits. The suspicious block matching model is described by formula (5):

(5)
$ x_{id}^{t+1}=wx_{id}^{t}+c_{1}r_{1}\left(p_{id}-x_{id}^{t}\right)+c_{2}r_{2}\left(p_{gd}-x_{id}^{t}\right) $
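Equations (1), (2), and (5) together describe a particle-swarm-style update of candidate block positions toward a personal best $p_{id}$ and a global best $p_{gd}$. A minimal numerical sketch, assuming scalar positions and illustrative values for the inertia weight $w$, the acceleration coefficients $c_{1}$, $c_{2}$, and the random factors $r_{1}$, $r_{2}$ (none of these values are given in the paper):

```python
# One particle-swarm-style update step, as in Eqs. (1), (2), and (5):
#   v(t+1) = w*v(t) + c1*r1*(p_id - x(t)) + c2*r2*(p_gd - x(t))
#   x(t+1) = x(t) + v(t+1)
# All coefficient values below are illustrative, not taken from the paper.

def pso_step(x, v, p_id, p_gd, w=0.7, c1=1.5, c2=1.5, r1=0.5, r2=0.5):
    """Return the updated (position, velocity) of one scalar particle."""
    v_next = w * v + c1 * r1 * (p_id - x) + c2 * r2 * (p_gd - x)
    x_next = x + v_next
    return x_next, v_next

if __name__ == "__main__":
    x, v = 0.0, 0.0
    # Both bests pull the particle toward the shared optimum at 1.0.
    for _ in range(50):
        x, v = pso_step(x, v, p_id=1.0, p_gd=1.0)
    print(x)  # after 50 damped-oscillation steps, x is close to 1.0
```

With these coefficients the iteration is stable and the position settles near the shared best; larger acceleration terms or $w$ close to 1 would make it oscillate or diverge.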

The scale spatial distribution function is constructed, and sparse background feature segmentation of public-place security images taken during epidemics and sudden events is carried out based on the matched filter detection method to obtain joint detection results of image position and scale parameters. The extracted security images are analyzed by information fusion using adaptive clustering, and the label distribution, L, derived from the background information of factors threatening public security (such as origin time, birthplace, and movement of threat factors) can assist in the visual feature learning in a convolutional neural network. The fusion clustering model is expressed as follows:

(6)
$ v\left(x\right)=g^{-1}\left[g\left(1\right)-g\left(u\left(x\right)\right)\right] $

where $u(x)$ is the neighborhood grayscale information of security images taken during epidemics and sudden events, and $g(\cdot )$ is the rotation-invariant feature quantity of the image, which satisfies $g\colon [0,1]\rightarrow [0,1]$. According to the above analysis, a suspicious information detection model is constructed, and suspicious features are extracted according to the information detection results.
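As a concrete instance of the fusion clustering rule in Eq. (6), the following sketch assumes the generator $g(u)=u^{2}$; this choice is illustrative, since the paper only requires a $g\colon [0,1]\rightarrow [0,1]$ with a computable inverse.

```python
import math

# Fusion clustering output of Eq. (6): v(x) = g^{-1}[ g(1) - g(u(x)) ].
# The generator g(u) = u**2 is an assumed example; the paper does not fix g.

def g(u):
    return u * u

def g_inv(y):
    return math.sqrt(y)

def fusion_cluster(u_x):
    """Map a neighborhood grayscale value u(x) in [0,1] to the fused v(x)."""
    return g_inv(g(1.0) - g(u_x))

if __name__ == "__main__":
    for u in (0.0, 0.6, 1.0):
        print(u, fusion_cluster(u))  # 0.0 -> 1.0, 0.6 -> 0.8, 1.0 -> 0.0
```

Note that with the identity generator $g(u)=u$ the rule collapses to the standard fuzzy complement $v(x)=1-u(x)$; other generators give different complement shapes while keeping the output in $[0,1]$.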

2.2 Image Information Enhancement

Combining background and environmental factor detection, suspicious background information feature decomposition is realized, the edge contour detection and image enhancement model is constructed, and the background suspicious principal component feature quantity is obtained as follows:

(7)
$ P\left(Y\right)=\frac{\exp \left\{-\beta \sum _{c\subset C}V_{c}\left(Y\right)\right\}}{\sum _{Y}\exp \left\{-\beta \sum _{c\subset C}V_{c}\left(Y\right)\right\}} $

where $V_{c}(Y)$ is the subspace distribution of the security inspection image, $\beta $ is the correlation map, and $\sum _{c\subset C}V_{c}(Y)$ is the template matching the key-point direction information distribution, which minimizes the loss of style and content in the final output, and obtains the factors threatening public security. The model function of the drawing style learning framework is as follows:

(8)
$ L\left(a,b_{m}\right)=\sum _{V_{m}\in P^{res}}\sum _{V_{n}\in P^{true}}\frac{V_{m}\cap V_{n}}{\left| V\right| } $

where $V_{m}$ is the label of the factor threatening public security, and $V_{n}$ is the feature detection value of factors threatening public security appearing in adjacent order. According to the position and scale distribution of the extreme points, the pixel values of the subspace noise reduction and information enhancement output of the security image can be calculated as follows:

(9)
$ \hat{f}\left(x,y\right)=\left\{\begin{array}{ll} g\left(x,y\right)-1, & \text{if}\enspace g\left(x,y\right)-\hat{f}_{Lee}\left(x,y\right)\geq t\\ g\left(x,y\right)+1, & \text{if}\enspace g\left(x,y\right)-\hat{f}_{Lee}\left(x,y\right)<t\\ g\left(x,y\right), & \text{else} \end{array}\right. $

where $g(x,y)$ is the matching rate of grayscale data, $\hat{f}_{Lee}(x,y)$ is the frequency parameter of content feature matching, and $t$ is the sampling point. Using comprehensive feature vector fusion and difference fusion over the two domains X and Y, a feature detection model is constructed, which is expressed as follows:

(10)
$ n_{pq}=\frac{\mu _{pq}}{\left(\mu _{00}\right)^{\gamma }} $

where $\mu _{pq}$ is the R-layer color distribution of the image, and $\mu _{00}$ is the G-layer color distribution of the security image. Semantic segmentation of the security image is realized by using the difference mapping method, and the image information is deeply enhanced, with the output expressed as follows:

(11)
$ \hat{x}\left(k/k\right)=\sum _{j=1}^{m}\hat{x}^{i}\left(k/k\right)u_{j}\left(k\right) $

where $u_{j}(k)$ is a local feature, and $\hat{x}^{i}$ is a public security threat factor. Based on the above analysis, a noise reduction model is constructed, and a suspicious clustering model with enhanced information output is obtained, as shown in Fig. 2.

Fig. 2. Clustering model of suspicious features in security images from public places.
../../Resources/ieie/IEIESPC.2023.12.4.283/fig2.png
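The pixel-wise enhancement rule of Eq. (9) compares each observed pixel $g(x,y)$ with a Lee-filter estimate and nudges it by one grayscale level. A minimal sketch, in which a 3x3 local mean stands in for the Lee estimate $\hat{f}_{Lee}$ and the threshold $t=10$ is an illustrative value (neither choice is specified in the paper):

```python
# Enhancement rule of Eq. (9): subtract 1 where the observed pixel exceeds
# the local estimate by at least t, otherwise add 1 (the two printed
# conditions are exhaustive, so the "else" branch never fires).
# The 3x3 local mean is an assumed stand-in for the Lee filter estimate.

def local_mean(img, x, y):
    """3x3 neighborhood mean with edge clamping."""
    h, w = len(img), len(img[0])
    vals = [img[min(max(x + dx, 0), h - 1)][min(max(y + dy, 0), w - 1)]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    return sum(vals) / 9.0

def enhance(img, t=10):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for x in range(h):
        for y in range(w):
            diff = img[x][y] - local_mean(img, x, y)
            if diff >= t:
                out[x][y] = img[x][y] - 1   # pixel well above local estimate
            else:
                out[x][y] = img[x][y] + 1   # pixel below the threshold gap
    return out

if __name__ == "__main__":
    img = [[100] * 5 for _ in range(5)]
    img[2][2] = 200                 # a bright outlier
    out = enhance(img)
    print(out[2][2], out[0][0])     # outlier is pulled down, flat area up
```

The outlier at (2, 2) exceeds its neighborhood mean by far more than $t$ and becomes 199, while flat background pixels are shifted to 101.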

3. Feature Extraction Optimization

3.1 Suspicious Information Fusion

Based on suspicious edge detection, image enhancement is carried out by using a subspace noise reduction method. Combined with a cross-regional block fusion model, multi-level suspicious feature detection in security images from public places during epidemics and sudden events is constructed as follows:

(12)
$ H\left(r\right)=\cos \left(\frac{\pi }{2}\log _{2}\left(\frac{2r}{\pi }\right)\right) $

where $r$ is the spatial envelope feature. According to cross-regional block fusion, several different types of image styles are defined, and the suspicious feature point set is obtained:

(13)
$ L=\frac{1}{\sqrt{a}}\left(\frac{a+1}{2}-\frac{\left| a-1\right| }{B/f_{0}}\right) $

where $a$ is the color feature, $f_{0}$ is the deep network feature, and $B$ is the multi-view local color component. Using semantic segmentation, a fusion multilevel feature distribution model of security images is constructed, and vector quantization coding and the cross-regional block fusion output from security images captured during epidemics and sudden events are obtained as follows:

(14)
$ GD=\left(\frac{1}{\left| PS\right| }\sum _{i=1}^{\left| PS\right| }d_{i}^{2}\right)^{\frac{1}{2}} $

where $PS$ is semantic information from the security inspection, and $d_{i}$ is the sampling interval from the security inspection. Using a fine semantic segmentation method, the grid parameter distribution of the security images is obtained as follows:

(15)
$ SP=\sqrt{\frac{1}{\left| PS\right| -1}\sum _{i=1}^{\left| PS\right| }\left(\overline{d}-d_{i}\right)^{2}} $

By using multi-scale feature decomposition, the detection of suspicious output from security images taken during epidemics and sudden events can be obtained as follows:

(16)
$ I=I\left(C^{N};D^{N}|s^{N}\right) $

where $C^{N}$ is the single-depth compression scale of fuzzy space, $D^{N}$ is the characteristic response, and $s^{N}$ is the label inherited from the original image. According to suspicious clustering segmentation, suspicious information fusion processing is carried out using multi-level visual feature analysis to improve image detection and recognition.
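Equations (14) and (15) reduce to a root-mean-square statistic (GD) and a sample standard deviation (SP) over the per-point distances $d_{i}$ in the set $PS$. A minimal sketch on a hypothetical list of distances:

```python
import math

# GD of Eq. (14): root mean square of the distances d_i over the set PS.
# SP of Eq. (15): sample standard deviation (divisor |PS| - 1) of the same d_i.
# The input list of distances below is hypothetical, for illustration only.

def gd(distances):
    return math.sqrt(sum(d * d for d in distances) / len(distances))

def sp(distances):
    mean = sum(distances) / len(distances)
    return math.sqrt(sum((mean - d) ** 2 for d in distances)
                     / (len(distances) - 1))

if __name__ == "__main__":
    d = [3.0, 4.0, 5.0]
    print(gd(d))  # sqrt((9 + 16 + 25) / 3), about 4.08
    print(sp(d))  # mean 4.0, deviations -1, 0, 1, so SP = 1.0
```

A small GD indicates the sampled points lie close to the reference on average, while a small SP indicates they are evenly spread, which is why the two statistics are reported together.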

3.2 Feature Extraction in Security Images

Combined with the cross-regional block fusion model, the analytic rule function for gradient fusion of suspicious information from security images taken during epidemics and sudden events is constructed. Based on the cross-regional block fusion distribution and the decomposition results of suspicious features in images, the risk factors from the security images are calculated as follows:

(17)
$ p\left(x,t\right)=\lim _{\Delta x\rightarrow 0}\left[\sigma \frac{u-\left(u+\Delta u\right)}{\Delta x}\right]=-\sigma \frac{\partial u\left(x,t\right)}{\partial x} $

where $\sigma $ represents the suspicious parameter from public security image fusion, and $\Delta x$ represents the distribution set of candidate areas from public security images captured during epidemics and sudden events. Based on the shallow feature information method, the fusion method for public security images is constructed, and the filtering detection model of public security images is established. The suspicious segmentation results are as follows:

(18)
$ \begin{array}{c} P_{rk}=\left(\frac{\sum _{j=1}^{c}I_{swk}\left(1,j\right)}{c},\frac{\sum _{j=1}^{c}I_{swk}\left(2,j\right)}{c},\ldots ,\\ \frac{\sum _{j=1}^{c}I_{swk}\left(i,j\right)}{c},\ldots ,\frac{\sum _{j=1}^{c}I_{swk}\left(r,j\right)}{c}\right) \end{array} $

where $c$ is the central pixel of the security image, and $r$ is the feature map fusion model of the security image. The suspicious detection output from a security image can be obtained as follows:

(19)
$ d_{i+1}=2F\left(x_{i+1}+\frac{1}{2},y_{i}+2\right) $

where $x_{i+1}$ is the color difference component of the whole area in the security image, and $y_{i}$ is the local color component of the whole area in the image. Based on the cross-regional block fusion distribution and the feature decomposition results of suspicious information from the image, the grayscale edge information decomposition is adopted to obtain the image feature decomposition model as follows:

(20)
$ g\left(x,y\right)=h\left(x,y\right)*f\left(x,y\right)+\eta \left(x,y\right) $

where $h(x,y)*f(x,y)$ is the fusion parameter of the suspicious information samples based on epidemics and sudden events. The spatial structure of the images is extracted, difference characteristics of human body shapes are captured to enhance suspicious information detection during epidemics and sudden events, and the safety threatening factors are as follows:

(21)
$ g\left(x,y\right)=f\left(x,y\right)+\eta \left(x,y\right) $

where $f(x,y)$ is the spatial spectrum feature of suspicious information in the images, and $\eta (x,y)$ is the grayscale feature from security inspection colors based on sudden events. In summary, grayscale edge information decomposition is adopted to extract suspicious features from security images captured during epidemics and sudden events. Visual simulation was carried out based on LabVIEW.
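Equations (20) and (21) are the classical blur-plus-noise degradation model: the observed image $g$ is the spatial convolution of a point-spread function $h$ with the scene $f$, plus additive noise $\eta$. A minimal sketch with an assumed 3x3 box point-spread function and zero noise (both are illustrative choices, not specified in the paper):

```python
# Observed image g(x,y) = h(x,y) * f(x,y) + eta(x,y), as in Eq. (20).
# h is a 3x3 box blur and eta is a constant 0 here; both are assumptions.

def convolve2d_same(f, h):
    """'Same'-size 2D convolution with zero padding outside the image."""
    H, W = len(f), len(f[0])
    kh, kw = len(h), len(h[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    yy, xx = y + i - oy, x + j - ox
                    if 0 <= yy < H and 0 <= xx < W:
                        acc += h[i][j] * f[yy][xx]
            out[y][x] = acc
    return out

def degrade(f, h, eta=0.0):
    g = convolve2d_same(f, h)
    return [[v + eta for v in row] for row in g]

if __name__ == "__main__":
    f = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]      # a single bright pixel
    box = [[1 / 9] * 3 for _ in range(3)]      # 3x3 box point-spread function
    g = degrade(f, box)
    print(g)  # the spike of mass 9 is spread evenly over the 3x3 image
```

Restoration methods then try to invert this model: given $g$ and an estimate of $h$, recover $f$ while suppressing $\eta$.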

4. Simulation and Results Analysis

In order to verify the performance of the proposed method for extracting suspicious features from public security images taken during epidemics and sudden events, a simulation was conducted and analyzed based on LabVIEW. The public security images used came from the Painting91 dataset, the OilPainting dataset, and the Pandora dataset. The Painting91 dataset contains 4266 images from 91 artists, including 2338 paintings from 50 artists done in 13 styles. The OilPainting dataset has 19,787 oil paintings in 17 artistic styles, and the Pandora dataset contains 7724 images in 12 artistic styles. These three datasets have different characteristics: compared with Painting91, the OilPainting dataset divides styles in more detail, while the Pandora dataset covers a wider range of categories over a longer time span. From them, a training set and a testing set were created according to the settings of the datasets, containing 1250 images and 1088 images, respectively. The training process for suspicious feature extraction from security images captured during epidemics and sudden events was iterated 3000 times. The distribution of deep feature information from the security images was 12, and the sampling delay was 0.25 ms. Based on the above parameter settings, the suspicious feature extraction model for security images was constructed, and the security images obtained are shown in Fig. 3.

Taking the images in Fig. 3 as samples, suspicious features were detected. When a pixel's grayscale value reached 255, it belonged to a special object; conversely, pixels with a grayscale value of 0 were removed to reflect the background or the abnormal range of the object. The second value in the figure is the grayscale value. When the whole picture is presented, the influence of black and white is very significant; that is, for a single picture, 256 different light thresholds can be selected. The grayscale value was then used to reflect the overall and local characteristics of the image. Binarization is a key link in image processing, especially for functional images, so a large amount of binary image data was produced, which allows the image processing to continue. A binary image contains only the values 0 and 255 and is not affected by intermediate pixel values. Generally, to obtain the desired binary image, the uncovered area is defined as the edge related to the closed boundary: a pixel beyond the aperture is a special object with a grayscale of 255; otherwise, the pixel is set to the 0 level to show the background or a region of the object. The detection results for suspicious feature points from public security images based on epidemics and sudden events are shown in Fig. 4. The central error distribution of video image detection is shown in Fig. 5.
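The binarization step described above, mapping pixels at or above a chosen threshold to the object value 255 and all others to the background value 0, can be sketched as follows (the threshold 128 is an illustrative choice; the procedure above selects the threshold per image):

```python
# Binarize a grayscale image: pixels >= threshold become 255 (object),
# all others become 0 (background). The threshold of 128 is illustrative.

def binarize(img, threshold=128):
    return [[255 if p >= threshold else 0 for p in row] for row in img]

if __name__ == "__main__":
    img = [[12, 200, 130],
           [90, 128, 255],
           [0, 127, 64]]
    for row in binarize(img):
        print(row)  # only the values 0 and 255 remain
```

The output image contains only the two levels 0 and 255, which is what makes subsequent boundary and feature-point extraction straightforward.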

By analyzing Figs. 4 and 5, we can see that during image formation, noise and interference cause many groups of features, or incorrect basic elements of one image, to appear in another image. At the same time, Fig. 5 shows that the proposed method basically maintains a low error and is superior to DF and MIL in most cases, with high detection accuracy. In the original image, these erroneous basic elements are defect problems, which are usually solved within a regularization framework through different constraint terms. First, the Hopfield network was analyzed: when global constraints, compatibility constraints, and adjacency constraints are present, the energy effect of the Hopfield network is fully demonstrated. The algorithm that minimizes straight-line segments between two images improves the reliability of registration. In addition, methods such as the least-squares method and the voting method effectively eliminate false points and mismatches. Under the proposed method, the calibration accuracy of suspicious feature points from security images during epidemics and sudden events is high, and the suspicious feature extraction results were tested against different methods. The proposed method had a high identification rate for the output of suspicious feature extraction, and its time cost was also tested. The comparison results in Table 1 show that the time cost of the proposed method for suspicious feature extraction from security images during epidemics and sudden events is short.

Table 1. Time overhead test (unit: ms).

Dataset      Our method    SIFT algorithm    Harris algorithm
Dataset 1    3.554         12.184            21.135
Dataset 2    3.131         12.039            21.570
Dataset 3    3.820         12.838            21.044
Dataset 4    3.153         12.802            21.657

Fig. 3. Security images obtained.
../../Resources/ieie/IEIESPC.2023.12.4.283/fig3.png
Fig. 4. Feature detection in security images from public places during epidemics and sudden events.
../../Resources/ieie/IEIESPC.2023.12.4.283/fig4.png
Fig. 5. Diagrams for center error in a video sequence.
../../Resources/ieie/IEIESPC.2023.12.4.283/fig5.png

5. Conclusion

In this paper, suspicious features from public security images captured during epidemics and sudden events were extracted according to the undirected weighted graph recognition method. Based on cross-regional block fusion for suspicious feature extraction, the orthogonal wavelet scale decomposition method was adopted to detect suspicious information from public security images, and the semantic segmentation method was adopted to construct a fusion multilevel feature distribution model for public security images taken during epidemics and sudden events. Combined with multilevel visual feature analysis, suspicious information fusion processing was carried out to enhance image detection and recognition. This research shows that the proposed method has high output stability, a short time cost, and good reliability when extracting suspicious features from public-place security images taken during epidemics and sudden events, and that it improves detection and recognition from those security images.

REFERENCES

[1] S. Khan, M. Tufail, M. T. Khan, et al., "A novel framework for multiple ground target detection, recognition and inspection in precision agriculture applications using a UAV," Unmanned Systems, vol. 10, no. 1, pp. 45-56, 2022.
[2] H. J. Jo and W. Choi, "A survey of attacks on controller area networks and corresponding countermeasures," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 6123-6141, 2021, doi: 10.1109/TITS.2021.3078740.
[3] A. Marino, M. Sugimoto, K. Ouchi, and I. Hajnsek, "Validating a notch filter for detection of targets at sea with ALOS-PALSAR data: Tokyo Bay," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 12, pp. 4907-4918, 2014, doi: 10.1109/JSTARS.2013.2273393.
[4] J. Daihong, Z. Sai, D. Lei, and D. Yueming, "Multi-scale generative adversarial network for image super-resolution," Soft Computing, vol. 26, no. 8, pp. 3631-3641, 2022.
[5] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 25th International Conference on Neural Information Processing Systems, Red Hook: Curran Associates Inc., pp. 1097-1105, 2012.
[6] P. Liu, Y. Zhou, D. Peng, and D. Wu, "Global-attention-based neural networks for vision language intelligence," IEEE/CAA Journal of Automatica Sinica, vol. 8, no. 7, pp. 1243-1252, 2021, doi: 10.1109/JAS.2020.1003402.
[7] F. S. Wang, J. Wang, and B. Li, "Deep attribute learning based traffic sign detection," Journal of Jilin University (Engineering and Technology Edition), vol. 48, no. 1, pp. 319-329, 2018, doi: 10.13229/j.cnki.jdxbgxb20161120.
[8] M. Quartulli and M. Datcu, "Stochastic geometrical modeling for built-up area understanding from a single SAR intensity image with meter resolution," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 9, pp. 1996-2003, 2004, doi: 10.1109/TGRS.2004.833391.
[9] X. Xu, Z. Fengli, W. Guojun, F. Xiyou, S. Minmin, L. Zhikun, and L. Xingdong, "Building height retrieval from dual-aspect SAR images based on match of strong backscattering features," Remote Sensing Technology and Application, vol. 31, no. 1, pp. 149-156, 2016.
[10] H. Yu, X. Cheng, C. Chen, A. A. Heidari, J. Liu, Z. Cai, and H. Chen, "Apple leaf disease recognition method with improved residual network," Multimedia Tools and Applications, vol. 81, no. 6, pp. 7759-7782, 2022, doi: 10.1007/s11042-022-11915-2.
[11] C. Xu, Z. Chen, and R. Hou, "Deep learning classification method of Landsat 8 OLI images based on inaccurate prior knowledge," Journal of Computer Applications, vol. 40, no. 12, pp. 3550-3557, 2020.
[12] D. Sulla-Menashe, J. M. Gray, S. P. Abercrombie, and M. A. Friedl, "Hierarchical mapping of annual global land cover 2001 to present: the MODIS Collection 6 Land Cover product," Remote Sensing of Environment, vol. 222, pp. 183-194, 2019.
[13] V. F. Rodriguez-Galiano, B. Ghimire, J. Rogan, M. Chica-Olmo, and J. P. Rigol-Sanchez, "An assessment of the effectiveness of a random forest classifier for land-cover classification," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 67, pp. 93-104, 2012.
[14] C. Zhang, I. Sargent, X. Pan, H. Li, A. Gardiner, J. Hare, and P. M. Atkinson, "An object-based convolutional neural network (OCNN) for urban land use classification," Remote Sensing of Environment, vol. 216, pp. 57-70, 2018, doi: 10.1016/j.rse.2018.06.034.
[15] C. Homer, J. Dewitz, L. Yang, S. Jin, P. Danielson, G. Xian, and K. Megown, "Completion of the 2011 National Land Cover Database for the conterminous United States, representing a decade of land cover change information," Photogrammetric Engineering and Remote Sensing, vol. 81, no. 5, pp. 345-354, 2015.

Author

Liyan Shi
../../Resources/ieie/IEIESPC.2023.12.4.283/au1.png

Liyan Shi obtained her M.Sc. degree from Huazhong University of Science and Technology (2008). Presently, she is an associate professor in the School of Information Engineering and Artificial Intelligence, Henan Open University. She has published nearly 10 articles. Her areas of interest include image processing, software engineering, and virtual reality technology.