
  1. ( Department of Public Studies, Henan Vocational College of Nursing, Anyang, 455000, China wzw202312@126.com)
  2. ( Information Engineering Institute, Yellow River Conservancy Technical Institute, Kaifeng, 475004, China dongshujuan2005@126.com)



Keywords: Pulse-coupled neural network, STM32, Moving targets, Tracking and detection, Mean shift

1. Introduction

In the digital age, the application of embedded systems in Motion Target Tracking and Detection (MTTD) is becoming increasingly significant. However, traditional methods face significant challenges in dealing with moving-target problems in embedded environments with high real-time requirements and limited computing resources. The development of this field is limited by industry bottlenecks, and new technological breakthroughs are urgently needed to improve the performance of embedded systems in target perception and response [1,2]. Meanwhile, as the Internet of Things (IoT) and related technologies continue to develop, the demand for more intelligent, real-time, and efficient embedded target tracking solutions has become increasingly urgent. In the current industry environment, traditional embedded systems face limitations in algorithm efficiency and real-time performance when processing complex moving targets. This directly restricts their widespread application in fields such as intelligent monitoring and autonomous driving. The industry's demand for low-power, high-performance embedded target tracking solutions is increasing, and the call for technological innovation is growing. The pulse-coupled neural network (PCNN), as a bio-inspired neural network (NN), has been widely used in fields such as image processing, object detection, and pattern recognition. Due to its biologically inspired design, PCNN performs well in processing image features such as edges and textures [3,4]. In practical applications, it has been used for tasks such as image segmentation and object tracking, demonstrating advantages over traditional NNs in certain aspects. Therefore, this research focuses on STM32 embedded MTTD based on PCNN.
By comprehensively analyzing the development needs of the industry, the shortcomings of existing research, and the challenges faced by embedded systems, this study recognizes that the combination of PCNN and STM32 is expected to bring new breakthroughs in embedded target tracking. The innovation of this study lies in combining PCNN with the STM32 embedded system to achieve efficient MTTD. The research aims to provide an efficient, real-time, and low-power target tracking solution for embedded systems, and hopes to promote the application of embedded technology in intelligent systems to meet the industry's continuous pursuit of advanced target perception and response technologies.

The study consists of five parts. First, the research background, problems, and solutions of embedded MTTD are introduced. Second, the research status of embedded MTTD is reviewed, and the existing difficulties and shortcomings of those methods are summarized. The third part proposes an STM32 embedded MTTD method based on PCNN. Next, the optimization effect of the algorithm is evaluated through comparative experiments and efficiency validation. Finally, the research method is summarized, and its shortcomings and future research directions are pointed out.

2. Related Works

In the digital age, the application of embedded systems in MTTD is gradually becoming prominent. However, traditional methods face significant challenges in dealing with moving targets under high real-time requirements and limited computing resources in embedded environments. Huang et al. proposed a spatiotemporal joint algorithm model for weak target detection in complex moving backgrounds, which detected targets through time-domain background suppression and spatiotemporal target enhancement. These results confirm that this algorithm can effectively detect targets and extract trajectories under low-speed background and high-speed target conditions, and is suitable for low signal-to-noise ratio scenarios [5]. Zhang et al. proposed a strategy based on improved particle swarm optimization (IPSO) to optimize target tracking and detection performance for distributed MIMO radar in hostile environments. These results confirm that IPSO reduces computational complexity while approaching exhaustive search performance, making it more advantageous compared to multi-start local search [6]. Doukhi et al. proposed a UAV vision-guided tracking method based on the YOLO deep learning algorithm and the Nvidia Jetson TX2 edge computing module, to address the poor robustness of real-time target detection and tracking in complex environments. These results confirm that this method exhibits good robustness and effectiveness in various simulation and real-time flight experiments [7]. Zhang et al. proposed a video annotation method based on an adaptive Gaussian mixture model, Camshift, and a Kalman filter to address the information retrieval problem caused by the rapid growth of video data volume, achieving detection and tracking of moving targets. These results confirm that the annotation effect of this method is good, providing an effective application foundation for video indexing and search [8].

On the other hand, PCNN has been widely applied in image processing, object detection, and pattern recognition, and many scholars have conducted a series of studies on it. He et al. proposed a dual-band infrared image fusion method based on feature extraction and a novel multi-PCNN to improve infrared image quality. The method modulated the main PCNN and optimized the connection strength using Laplacian energy. These results confirm that this method produces fused images with more information, rich details, and clear edges, and is superior to traditional methods [9]. Niepceron et al. proposed a selective search algorithm combined with a simplified PCNN for fast and accurate tumor detection, to address the time-consuming, expertise-dependent MRI analysis of brain tumors. These results confirm that the method performs well in both computational cost and detection accuracy, providing a new direction for the development of lightweight brain tumor detection systems [10]. 3D-PCNN suffers from complex parameters and low accuracy in color image segmentation. Therefore, Xing et al. proposed using the whale optimization algorithm to optimize the model parameters. These results confirm that this method can achieve more accurate and efficient image segmentation results [11]. Li et al. proposed a new method using a linearly decreasing threshold instead of an exponential one to address the slow speed of traditional PCNN in remote sensing image segmentation. These results confirm that the improved PCNN is faster in remote sensing image segmentation and more suitable for fast target recognition, effectively improving segmentation efficiency and accuracy [12].

In summary, the embedded MTTD problem has been addressed to some extent. However, there are still issues such as high real-time requirements and limited computing resources. PCNN, as an artificial neural network (ANN) inspired by the biological nervous system, simulates the interaction between neurons in the brain and processes information through the transmission and coupling of pulse signals. The characteristics of PCNN are its parallel processing ability and good adaptability to tasks such as pattern recognition and image processing. Therefore, combining PCNN with embedded systems such as STM32 in research can fully leverage the advantages of PCNN in parallelism, real-time performance, and resource efficiency, providing an effective solution for MTTD.

3. Motion Target Tracking and Detection Method based on PCNN

Based on the research of traditional object detection algorithms and PCNN, a new detection algorithm has been proposed. It focuses on solving the complexity and difficulty of PCNN parameter selection. By introducing the optimal family genetic algorithm, the study aims to optimize the parameters of PCNN and achieve accurate recognition of moving targets.

3.1 STM32 Embedded Motion Target Tracking and Detection Method

3.1.1 STM32 architecture

An embedded system is application-centric and built on a computer foundation. Its software and hardware configuration can be tailored to the application, and it is equipped with dedicated microprocessors rather than relying on the customized software, decoders, and simulators of traditional computer use [13]. Its hardware includes high-performance microprocessors and various interface circuits, while its software covers real-time operating systems and application software. STM32 is a family of ARM Cortex-M3 core microcontrollers launched by STMicroelectronics, characterized by high performance, low cost, and low power consumption [14]. The study adopts a hardware control platform based on an STM32 microprocessor, combined with a motion target detection and tracking algorithm for interactive software design. Real-time control of the camera is achieved through the rotating platform of the mechanical transmission system to complete MTTD. Fig. 1(a) shows the functional modules of the system. Fig. 1(b) is a schematic diagram of the mechanical rotation part.

Fig. 1. Functional module diagram of the system and schematic diagram of mechanical rotation part.

../../Resources/ieie/IEIESPC.2025.14.3.366/image1.png

Fig. 2. JTAG interface schematic diagram.

../../Resources/ieie/IEIESPC.2025.14.3.366/image2.png

In Fig. 1(a), the main working modules of the control platform hardware system include the power supply, LCD screen, OV7725 camera, JTAG interface, and motor drive control. A 3.2-inch LCD screen with a resolution of 320x240 is selected as the display, and data are stored using the Flexible Static Memory Controller (FSMC) on the STM32 [15]. The LCD controller undergoes a series of conversions to convert the collected signal data into RGB format and write it into the Graphics Random Access Memory (GRAM). In Fig. 1(b), the entire control module is lightweight and compact, and the mechanical rotation system operates stably. Due to the weight and installation requirements of the camera, only one flange-type deep groove ball bearing is used for fixation in both the horizontal and vertical directions. Fig. 2 shows the JTAG interface.

In the JTAG interface schematic in Fig. 2, control and communication are achieved through four control lines (Test Mode Select: TMS, Test Clock: TCK, Test Data In: TDI, Test Data Out: TDO). TCK receives the test clock input, TDI is used for data input, TDO outputs data from the JTAG port, and TMS sets the JTAG port to a specific test mode.
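As a side note on the display path described above, LCD controllers driving 320x240 panels over FSMC commonly store each pixel in GRAM as a 16-bit RGB565 word. The paper only states that the data are converted to RGB and written to GRAM, so the packing below is an illustrative assumption, sketched in Python:

```python
def rgb888_to_rgb565(r, g, b):
    """Pack an 8-bit-per-channel pixel into a 16-bit RGB565 word
    (assumed GRAM pixel format; the paper does not specify the layout)."""
    return ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)

# Pure red occupies the top five bits of the 16-bit word.
red = rgb888_to_rgb565(255, 0, 0)   # 0xF800
```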

3.1.2 Introduction to existing PCNN

Unlike traditional pulsed NN and ANN, PCNN simulates information transmission between neurons through pulses [16,17]. Fig. 3 shows the structure of PCNN neurons.

Fig. 3. Structure of PCNN neurons.

../../Resources/ieie/IEIESPC.2025.14.3.366/image3.png

In Fig. 3, the pulse-coupled neuron model consists of three main parts: the receiving domain, modulation, and pulse generation. The receiving domain consists of a linking channel and a feedback channel and receives input and feedback information from adjacent neurons. The feedback channel receives not only information from adjacent neurons but also external stimuli, as represented by Eq. (1).

(1)
$ F_{ij} [n]=F_{ij} [n-1]e^{-a_{F}} +V_{F} \sum M_{ijkl} Y_{kl} [n-1] +I_{ij} . $

In Eq. (1), the ignition information is represented by $Y_{kl} $. The time decay constant is $a_{F} $. The external input signal is represented by $I_{ij} $. $M$ is the connection weight matrix. The feedback of the $(i,j)$-th neuron is $F_{ij} [n]$. Eq. (2) is the operational relationship of synaptic input information, used to describe the reception of input between adjacent neurons.

(2)
$ L_{ij} [n]=L_{ij} [n-1]e^{-a_{L}} +V_{L} \sum M_{ijkl} Y_{kl} [n-1] . $

In Eq. (2), $V_{L} $ represents the feedback constant. The linear input of the connected neuron of the $(i,j)$-th neuron is $L_{ij} [n]$. The internal activities are represented by Eq. (3).

(3)
$ U_{ij} [n]=F_{ij} [n](1+\beta L_{ij} [n]) . $

In Eq. (3), $\beta $ represents the modulation constant between neurons. Whether neurons emit pulses depends on the relationship between internal activity $U_{ij} [n]$ and threshold $E_{ij} [n]$, represented by Eq. (4).

(4)
$ Y_{ij} [n] =\mathrm{step}(U_{ij} [n]-E_{ij} [n-1]) =\left\{\begin{aligned}&1, && U_{ij} [n]>E_{ij} [n-1],\\&0, && \text{others}.\end{aligned}\right. $

In Eq. (4), the ignition function $Y_{ij} $ represents neuronal ignition when it is equal to 1. When it is equal to 0, it indicates no ignition. The mathematical expression of $E_{ij} [n]$ is represented by Eq. (5).

(5)
$ E_{ij} [n]=E_{ij} [n-1]e^{-a_{E}} +V_{E} Y_{ij} [n-1] . $

In Eq. (5), $V_{E} $ represents the threshold constant, $a_{E} $ is the time decay constant, and $E_{ij} $ is the dynamic threshold.
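A single PCNN iteration over Eqs. (1)-(5) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the decay constants, amplitudes, and the 3x3 weight matrix are assumed values, and the neighbourhood sum realizes $\sum M_{ijkl}Y_{kl}[n-1]$ by shifting a padded copy of the firing map:

```python
import numpy as np

def pcnn_step(I, F, L, E, Y, M, a_F=0.1, a_L=0.3, a_E=0.2,
              V_F=0.5, V_L=0.2, V_E=20.0, beta=0.1):
    """One PCNN iteration implementing Eqs. (1)-(5) over image-sized arrays.
    I: external stimulus; F/L/E/Y: state arrays; M: 3x3 weight matrix."""
    # Weighted sum of neighbouring firings: sum M_ijkl * Y_kl[n-1]
    Yp = np.pad(Y, 1)
    link = sum(M[di, dj] * Yp[di:di + Y.shape[0], dj:dj + Y.shape[1]]
               for di in range(3) for dj in range(3))
    F = F * np.exp(-a_F) + V_F * link + I            # Eq. (1)
    L = L * np.exp(-a_L) + V_L * link                # Eq. (2)
    U = F * (1.0 + beta * L)                         # Eq. (3)
    Y = (U > E).astype(float)                        # Eq. (4)
    E = E * np.exp(-a_E) + V_E * Y                   # Eq. (5)
    return F, L, E, Y

# Toy usage: a uniform bright square fires together once E decays below U.
img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0
M = np.ones((3, 3)); M[1, 1] = 0.0                   # no self-connection
F = np.zeros_like(img); L = np.zeros_like(img)
E = np.ones_like(img); Y = np.zeros_like(img)
for _ in range(3):
    F, L, E, Y = pcnn_step(img, F, L, E, Y, M)
```

After the square fires, its threshold $E$ jumps by $V_E$ and the pixels stay silent for several iterations, which is the refractory behaviour the model relies on.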

3.1.3 OFGA-based PCNN parameter optimization

To address the slow modeling and poor detection performance of traditional algorithms, an optimized PCNN background differential motion target detection algorithm is proposed. It adopts the dual threshold idea to simplify PCNN and improve it, while introducing the Optimal Family Genetic Algorithm (OFGA). Fig. 4 is a simplified flowchart of the PCNN parameter optimization algorithm based on OFGA.

Fig. 4. Flowchart of PCNN parameter optimization algorithm based on OFGA.

../../Resources/ieie/IEIESPC.2025.14.3.366/image4.png

In Fig. 4, the PCNN parameter optimization algorithm based on OFGA first receives the image input, then initializes the population through the OFGA system and uses the PCNN system to calculate fitness. Traditional PCNN relies on experience for its initial parameter settings, and poor parameter selection can affect convergence speed and performance. OFGA-PCNN utilizes genetic algorithms to automatically optimize the initial parameters, avoiding the uncertainty of manually set parameters and quickly finding the globally optimal parameter combination, thereby accelerating convergence. Genetic algorithms have strong global search capabilities and can find optimal solutions in complex search spaces. Through genetic algorithms, OFGA-PCNN can avoid falling into local optima, thereby improving the global optimality of the network. In addition, genetic algorithms can efficiently explore the search space and accelerate the training of the network through selection, crossover, and mutation operations. The algorithm then checks whether the termination condition is met. If so, the image is output; otherwise, genetic operations are performed and a new population is generated. During this process, the key parameters of PCNN are decoded to achieve optimization. After simplification, the mathematical expression of $F_{ij} [n]$ is represented by Eq. (6).
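The loop in Fig. 4 can be illustrated with a minimal elitist genetic algorithm in Python. Everything here is a sketch under stated assumptions: the three parameters being tuned, their bounds, the elite ("optimal family") fraction, and the placeholder fitness are all hypothetical, since in the real system fitness would come from running the PCNN on a frame and scoring the segmentation:

```python
import random

random.seed(0)

# Hypothetical search ranges for three PCNN parameters (beta, V_L, a_E).
BOUNDS = [(0.01, 1.0), (0.01, 1.0), (0.05, 0.5)]

def fitness(params):
    # Placeholder: the real fitness would run the PCNN on an input frame
    # and score the resulting segmentation. Here, closer to an assumed
    # "good" parameter point is better.
    beta, v_l, a_e = params
    return -((beta - 0.4) ** 2 + (v_l - 0.2) ** 2 + (a_e - 0.25) ** 2)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def evolve(pop_size=20, generations=30, elite_frac=0.2, pm=0.1):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:max(2, int(elite_frac * pop_size))]   # the "optimal family"
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]    # crossover
            if random.random() < pm:                       # mutation, clamped
                k = random.randrange(len(child))
                lo, hi = BOUNDS[k]
                child[k] = min(hi, max(lo, child[k] + random.gauss(0, 0.05)))
            children.append(child)
        pop = elite + children                             # elitist replacement
    return max(pop, key=fitness)

best = evolve()
```

Breeding only within the elite set mimics the "micro space around dominant individuals" search that the text credits for OFGA's early acceleration.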

(6)
$ F_{ij} [n]=I_{ij} . $

In Eq. (6), $I_{ij} $ represents the external input stimulus signal. The linear input $L_{ij} [n]$ of the connected neurons of the $(i,j)$-th neuron is expressed mathematically using Eq. (7).

(7)
$ L_{ij} [n]=V_{L} \sum W_{ijkl} Y_{kl} [n-1] . $

$Y$ represents whether the neuron ignites in Eq. (7). $V_{L} $ is a connection constant. The mathematical expression of $Y_{ij} [n]$ is represented by Eq. (8).

(8)
$ Y_{ij} [n]=\left\{\begin{array}{ll} 1, & U_{ij} [n]>E_{ij} [n-1], \\ 0, & \text{others}. \end{array}\right. $

In Eq. (8), $U_{ij} [n]$ represents the internal activity of the $(i,j)$-th neuron. $E_{ij} [n-1]$ is the threshold function, and its value is crucial. Traditional algorithms usually use a single fixed threshold and set $E_{ij} [n-1]$ as a constant. However, a threshold that is too small leads to significant noise, while a threshold that is too large may result in incomplete motion detection results. Therefore, the dual-threshold idea is introduced into each iteration of PCNN, and an improved PCNN motion detection algorithm is proposed. The dual-threshold idea uses upper and lower thresholds in binary threshold segmentation [18].
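A minimal sketch of one firing pass of the simplified model (Eqs. (6)-(8)) with a hysteresis-style dual threshold is given below. The threshold values and constants are illustrative assumptions, not the paper's parameters; the rule shown fires a weak pixel only when it touches a strongly firing one, which is one common reading of the upper/lower threshold idea:

```python
import numpy as np

def simplified_pcnn_fire(I, Y_prev, W, V_L=0.2, beta=0.3,
                         E_low=0.4, E_high=0.8):
    """One firing pass of the simplified PCNN (Eqs. (6)-(8)) with an
    assumed hysteresis-style dual threshold (illustrative values)."""
    F = I                                              # Eq. (6): F = stimulus
    Yp = np.pad(Y_prev, 1)
    link = sum(W[di, dj] * Yp[di:di + I.shape[0], dj:dj + I.shape[1]]
               for di in range(3) for dj in range(3))
    L = V_L * link                                     # Eq. (7)
    U = F * (1.0 + beta * L)                           # internal activity
    strong = U > E_high                                # upper threshold
    weak = U > E_low                                   # lower threshold
    # A weak pixel fires only if one of its 8 neighbours fires strongly.
    sp = np.pad(strong, 1)
    near_strong = np.zeros_like(strong)
    for di in range(3):
        for dj in range(3):
            near_strong |= sp[di:di + I.shape[0], dj:dj + I.shape[1]].astype(bool)
    return (strong | (weak & near_strong)).astype(float)   # Eq. (8), dual threshold

# Toy background-difference image: strong blob, weak halo pixel, isolated noise.
diff = np.zeros((6, 6))
diff[2:4, 2:4] = 1.0          # strong moving region
diff[1, 2] = 0.5              # weak pixel touching the region -> kept
diff[0, 0] = 0.5              # isolated weak pixel (noise) -> suppressed
Y = simplified_pcnn_fire(diff, np.zeros_like(diff), np.ones((3, 3)))
```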

3.2 Motion Target Tracking and Detection Method Based on Pulse-Coupled Neural Network

The research on target tracking algorithms is committed to achieving real-time tracking of detected moving targets. By confirming the shape, contour, or position of the moving object in each frame of the image, target tracking calculates the direction and trajectory of the object's motion, and provides timely feedback on the object's future motion status. Mean Shift is a common target tracking algorithm [19]. Fig. 5 shows the Mean Shift vector.

Fig. 5. The mean shift vector.

../../Resources/ieie/IEIESPC.2025.14.3.366/image5.png

In Fig. 5, the red dot is $x$. The dark blue points around the center are the sample points $x_{i} $. Each arrow represents the offset vector of $x_{i} $ relative to $x$. The average offset points toward the direction of the densest sample points, which is the gradient direction. Mean Shift includes two parts: target representation and target localization. The user selects the target area and candidate window, and establishes a feature histogram to represent the target model and candidate model. In motion target tracking, the target model is determined, its similarity with the candidate model is calculated, and the transfer vector is obtained. By iteratively calculating the Mean Shift vector, the target converges to its true position, completing the update of the target position and achieving target tracking [20]. In the feature space, the center of the target region is the origin, and the center of the candidate region in subsequent frames is another position. Therefore, the feature vector of the target model is represented by Eq. (9).
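The iterative Mean Shift update described above, moving $x$ along the average offset toward the densest sample region, can be sketched as follows (the Gaussian kernel and bandwidth are illustrative choices):

```python
import numpy as np

def mean_shift(points, x, bandwidth=1.0, iters=20, tol=1e-4):
    """Iterate the Mean Shift vector: repeatedly move x toward the
    kernel-weighted mean of the samples until the shift is negligible."""
    for _ in range(iters):
        d2 = np.sum((points - x) ** 2, axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))              # kernel weights
        shift = (w[:, None] * points).sum(0) / w.sum() - x  # Mean Shift vector
        x = x + shift
        if np.linalg.norm(shift) < tol:                     # converged to mode
            break
    return x

# Samples clustered near (5, 5); start the search at the origin.
rng = np.random.default_rng(0)
pts = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(200, 2))
mode = mean_shift(pts, np.array([0.0, 0.0]))
```

In the tracker, the sample weights would come from the histogram similarity between the target and candidate models rather than from a plain Gaussian kernel.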

(9)
$ \left\{\begin{aligned} \hat{q}&=\{ \hat{q}_{u} \} _{u=1,\ldots,m} , \\ \sum _{u=1}^{m}\hat{q}_{u} &=1 . \end{aligned}\right. $

In Eq. (9), $m$ represents the dimension of the feature vector. For training, a large number of motion target models must be established. Eq. (10) shows the library of input samples.

(10)
$ X=\{ g_{1,1},~g_{1,2},~g_{1,3},~\ldots,~g_{c,n} \} . $

In Eq. (10), $g_{c,n} $ represents the $n$-th action segment, which contains $m$ movements and postures. The composition of the action segment $g_{c,n} $ from its $m$ postures is represented by Eq. (11).

(11)
$ g_{c,n} =\{ P_{1},~P_{2},~P_{3},~\ldots,~P_{m} \} . $

After a complete action segment is projected to the output space, a ``trajectory'' will be formed in the output space and a set of index numbers containing timing information will be obtained. Eq. (12) is the specific calculation.

(12)
$ O_{c,n} (t)=\{ o_{t} \} ,~t\in T. $

In Eq. (12), $O_{c,n} $ is used to identify the index sequence of targets for each category. According to the histogram statistics rule, a histogram of an action sequence containing $n$ poses is represented by Eq. (13).

(13)
$ H(o_{u} )_{c,n} =\frac{f_{u} }{n} . $

In Eq. (13), $f_{u} $ represents the frequency of occurrence of the $u$-th output node in the action, and $n$ is the number of postures included in the action. New input actions are classified by matching Euclidean distances to known action templates to discriminate the class of unknown actions. The difference between actions is calculated from the normalized differences of the feature vectors of two poses, as represented by Eq. (14).

(14)
$ d(p_{i},p_{j} )=\sqrt{\sum _{k=1}^{N}w_{k} \left(\frac{f_{i,k} -f_{j,k} }{f_{k} (\max )-f_{k} (\min )} \right)^{2} } . $

In Eq. (14), $f_{i,k} $ and $f_{j,k} $ are the $k$-th eigenvector values of postures $p_{i} $ and $p_{j} $, respectively. $f_{k} (\max )$ is the $k$-th feature's maximum value, $f_{k} (\min )$ its minimum value, and $w_{k} $ its weight. The common frame is the foundation of the algorithm, used to ensure that consecutive actions share similar poses so that smooth transitions can be achieved. For any two pose frames, Eq. (15) can be used to measure the distance between their centers of gravity.
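Eqs. (13) and (14) translate directly into code. In the sketch below, the feature values, ranges, and weights are illustrative assumptions:

```python
import numpy as np

def pose_distance(p_i, p_j, f_min, f_max, w):
    """Normalised, weighted Euclidean distance between two pose feature
    vectors, per Eq. (14); f_min/f_max are per-feature ranges, w the weights."""
    diff = (p_i - p_j) / (f_max - f_min)
    return np.sqrt(np.sum(w * diff ** 2))

def action_histogram(indices, n_nodes):
    """Histogram of output-node indices for an action of n poses, per Eq. (13)."""
    counts = np.bincount(indices, minlength=n_nodes)
    return counts / len(indices)

# Illustrative poses with three features each (hypothetical values).
p1 = np.array([0.2, 1.0, 3.0])
p2 = np.array([0.4, 1.5, 2.0])
f_min = np.array([0.0, 0.0, 0.0])
f_max = np.array([1.0, 2.0, 4.0])
w = np.array([1.0, 1.0, 1.0])
d = pose_distance(p1, p2, f_min, f_max, w)
h = action_histogram(np.array([0, 1, 1, 2]), 4)
```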

(15)
$ D_{R} (f_{i},f_{j} )=\sqrt{(x_{i} -x_{j} )^{2} +(y_{i} -y_{j} )^{2} +(z_{i} -z_{j} )^{2} } . $

In Eq. (15), $(x,y,z)$ denotes the coordinates of the human body's center of gravity. Real-time tracking experiments are conducted based on the traditional Mean Shift tracking algorithm and the optimal-family-optimized PCNN motion target detection algorithm. Fig. 6 shows the entire algorithm process.

Fig. 6. Flow chart of detection algorithm.

../../Resources/ieie/IEIESPC.2025.14.3.366/image6.png

In Fig. 6, first, Mean Shift is used to represent and locate the initial target, and feature histograms of the target model and candidate model are established. Subsequently, the targeting and candidate models' similarity is calculated using the optimal family optimized PCNN, and the transition vector is obtained. By iteratively updating the Mean Shift vector, the target quickly and accurately converges to its true position, thereby achieving real-time tracking and detection of moving targets.

4. Performance Analysis of Motion Target Tracking and Detection Methods

To verify the effectiveness of the new algorithm, the study selected videos from the OTB2015 dataset that contain complex interferences such as occlusion, scale changes, and targets leaving the field of view. A comprehensive comparative experiment was conducted between the research algorithm and four superior tracking methods on the platform. To demonstrate the effectiveness more intuitively, five representative video samples were selected for a detailed display of the detection results.

4.1 Performance Verification of Target Tracking and Detection Methods

When conducting performance experiments on video sequences, the experimental computing environment was an Intel Core i5-2450 2.5 GHz processor, 4 GB of DDR3 memory, a 64-bit Microsoft Windows 7 operating system, and an NVIDIA GT 630M graphics card with 1 GB of memory. The software environment included VS2010, Intel C++, and OpenCV 2.4.8. The video was collected in the school laboratory and contains a total of 300 frames.

Fig. 7. Dataset example.

../../Resources/ieie/IEIESPC.2025.14.3.366/image7.png

As shown in Fig. 7, from the perspective of a single image, the video exhibits the following characteristics: the laboratory serves as the background, and the human target has a high degree of separability from it. Some color features of the person are similar to the background, such as the dark stripes on the clothes and the computer behind, and the white stripes on the walls. The person is close to the camera, there is no occlusion, and the target's movement speed is moderate. To further explore the performance gain of the OFGA optimization algorithm in PCNN motion detection, the additional computational complexity introduced by the algorithm was considered in standard surveillance video experiments. The study adopted a GPU general-purpose computing architecture, further introduced an NVIDIA GTX 580 graphics card with 2 GB of memory, and used VS2008 and CUDA 4.0 as the software environment. Fig. 8 shows the evolution curves of the GA and OFGA algorithms for solving the PCNN parameter optimization problem.

Fig. 8. Evolution curves of GA and OFGA algorithms for solving PCNN parameter optimization problems.

../../Resources/ieie/IEIESPC.2025.14.3.366/image8.png

In Fig. 8, the two algorithms used the same parameter settings (pm $=0.75$, P $=0.00$). According to Fig. 8(a), the number of dominant families in OFGA was 10, and the fitness value was the average value of the algorithm after 20 iterations. According to Fig. 8(b), the ordinary GA suffered from slow convergence and a tendency toward premature convergence (i.e., becoming trapped in a local optimum). The improved OFGA greatly alleviated this issue. By searching the micro-space around dominant individuals, OFGA achieved significant acceleration in the early stages of evolution and successfully subdivided the search space, greatly reducing the probability of premature convergence. Precision and success rate were used as two evaluation metrics to assess the performance comprehensively, compared against four excellent algorithms on the OTB platform. The detailed information of the algorithms used is shown in Table 1.

Table 1. Algorithm information table.

Serial Number | Author       | Topic                                                                                                     | Algorithm
[21]          | Zhou S       | Rotor Attitude Estimation for Spherical Motors Using Multiobject Kalman KCF Algorithm in Monocular Vision | fDSST, KCF
[22]          | Daderwal M C | Montreal Cognitive Assessment (MoCA) and Digit Symbol Substitution Test (DSST) as a screening tool for evaluation of cognitive deficits in schizophrenia | DSST
[23]          | Dong S       | Nonlinear gain-based event-triggered tracking control of a marine surface vessel with output constraints  | Struck

All experiments were conducted as a single evaluation, with an overlap threshold of 0.5 and a center position error threshold of 20. Fig. 9 shows the comparison of accuracy and success rate between the proposed algorithm and other algorithms.

Fig. 9. Comparison results of different algorithms.

../../Resources/ieie/IEIESPC.2025.14.3.366/image9.png

In Fig. 9, DSSTPF exhibited excellent performance. In Fig. 9(a), its accuracy reached 86.6%. In Fig. 9(b), its success rate was 81%. Compared with the other four algorithms that performed well on OTB2015, DSSTPF showed a significant improvement overall. Compared to DSST, this research model's accuracy and success rate improved by 12.7% and 14%, respectively, so it performs better than the other algorithms. The ROC curve of the model was analyzed, and the results are shown in Fig. 10.

Fig. 10. Comparison of ROC curves of models.

../../Resources/ieie/IEIESPC.2025.14.3.366/image10.png

Fig. 10(a) shows the ROC curve of the SOTA model, and Fig. 10(b) shows the ROC curve of the model proposed in this study. As shown in Fig. 10, the area under the curve of the proposed model is significantly larger than that of the SOTA model, indicating excellent performance. Table 2 shows its accuracy and success rate compared to the other four algorithms (fDSST, DSST, KCF, Struck) on different attributes. Three datasets were used: the Channel and Color Cluttered (CCC) dataset, the Low Resolution (LR) dataset, and the Blurred and Cluttered (BC) dataset.

Table 2. Comparison results with different interference attributes.

Algorithm | Precision (CCC / LR / BC) | Success rate (CCC / LR / BC)
This text | 0.892 / 0.668 / 0.783     | 0.835 / 0.652 / 0.736
fDSST     | 0.760 / 0.457 / 0.842     | 0.691 / 0.465 / 0.740
DSST      | 0.706 / 0.498 / 0.690     | 0.642 / 0.496 / 0.627

According to the data in Table 2, the proposed improved algorithm performs well in all aspects; in most cases, its accuracy and success rate are significantly higher than those of the other algorithms. Overall, the proposed algorithm achieves satisfactory tracking results, although there is room for improvement under certain specific attributes.

4.2 Performance Verification of Target Tracking and Detection Methods under Different Influences

To further investigate the algorithm's adaptability to lens motion and background disturbances, Table 3 lists the mean F1-measure values of six commonly used tracking models and the proposed model. This value effectively reflects an algorithm's ability to extract salient targets and suppress non-salient ones.

Table 3. F1-measure comparison results.

Video     | This text | FT   | GBVS | PQFT | AIM  | PCT  | SR
Fountain  | 0.26      | 0.06 | 0.13 | 0.17 | 0.16 | 0.14 | 0.07
Walking   | 0.31      | 0.06 | 0.38 | 0.27 | 0.30 | 0.31 | 0.22
Parachute | 0.37      | 0.30 | 0.19 | 0.23 | 0.18 | 0.27 | 0.13
Stefan    | 0.22      | 0.03 | 0.06 | 0.05 | 0.04 | 0.05 | 0.06

In Table 3, the model performed excellently in target tracking when facing camera motion or background disturbances. It is worth noting that the proposed method outperformed the various other methods in three of the four video scenes. For the Walking sequence, it outperformed all other methods except GBVS. Fig. 11 shows a comparison of algorithm accuracy and success rate under the occlusion attribute.

Fig. 11. Comparison of algorithm accuracy and success rate under block attribute.

../../Resources/ieie/IEIESPC.2025.14.3.366/image11.png

According to Fig. 11(a), the proposed algorithm maintained an accuracy of 89.1% in the presence of occlusion. According to Fig. 11(b), its success rate under occlusion reached 83.4%, giving it the best tracking performance among the five algorithms. The study used the Human3.6M dataset for training, which contains approximately 3.6 million annotated human motion data samples and their corresponding RGB images. The entire dataset can be divided into 11 major categories, each consisting of 15 subcategories. The major categories mainly represent 11 different professional model experimenters, while the subcategories cover different human movements. Fig. 12 shows the results obtained by three different models using average error as the evaluation metric on the Human3.6M dataset.

Fig. 12. Comparison of identification errors of different methods on Human3.6M dataset.

../../Resources/ieie/IEIESPC.2025.14.3.366/image12.png

According to Fig. 12(a), except for A, the proposed method had the smallest error compared to the other methods proposed by Du et al., with values of 118, 95, 96, 137, and 112 on B, C, D, E, and F, respectively. According to Fig. 12(b), except for d, the proposed method had the smallest error, with errors of 99, 120, 115, 117, and 112 on a, b, c, e, and f, respectively. The real-time performance and computational resource consumption of the various models were also compared; the results are shown in Table 4.

Table 4. Comparison of real-time comprehensive performance of algorithm models.

Time/s    |   5       |   10      |   20      |   40      |   80
Type      | A    B    | A    B    | A    B    | A    B    | A    B
This text | 86.6  5.6 | 85.3  4.3 | 85.6  6.6 | 88.2  2.7 | 88.9  5.3
FT        | 85.2  8.9 | 83.9  7.3 | 83.6  9.6 | 86.8  5.7 | 87.5  8.3
GBVS      | 81.3  7.8 | 78.7  6.2 | 79.7  8.5 | 82.9  4.6 | 83.6  7.2
PQFT      | 76.6 10.6 | 74.6  9.6 | 75.9 11.9 | 78.2  8.1 | 78.9 10.6
AIM       | 82.3  9.3 | 79.7  7.7 | 80.7 10.1 | 83.9  6.1 | 84.6  8.7
PCT       | 78.9 14.6 | 76.3 13.1 | 77.3 15.4 | 80.5 11.5 | 81.2 14.1
SR        | 83.6 11.3 | 81.6  9.7 | 82.6 12.6 | 85.2  8.1 | 85.9 10.7

In Table 4, A represents real-time performance and B represents computational resource consumption. As shown in the table, as time increases, the model proposed in this study consistently achieves high real-time performance and low computational resource consumption, excelling in all settings. The experimental results show that the proposed model performs well among the various models.

5. Conclusion

As IoT and intelligent technology rapidly develop, the demand for real-time and accurate MTTD in various fields is becoming increasingly urgent. Higher requirements have been put forward for efficient target processing in embedded systems, especially in application scenarios such as intelligent monitoring, autonomous driving, and robotics. This study focuses on the challenges of MTTD in embedded systems and proposes solutions to address the limitations of traditional embedded systems in handling complex environments. The study combines PCNN with the STM32 embedded platform, fully utilizing the parallel processing characteristics of PCNN to achieve fast and accurate feature extraction and recognition of moving targets. By optimizing the algorithms and hardware design, low-power, real-time, and high-precision target tracking and detection are achieved on the STM32. The results confirm that in the accuracy and success rate experiments, the research method improved accuracy and success rate by 12.7% and 14%, respectively, a significant advantage over the other algorithms. The proposed improved algorithm performs well in handling occlusion, out-of-view targets, and scale transformations, maintaining an accuracy of 89.1% even in the presence of occlusion. This study has made significant progress in PCNN-based STM32 embedded MTTD, but has not explored the complexity and cost of hardware implementation in detail. Future research can focus on optimizing the hardware design to improve the system's practicality and cost-effectiveness.

REFERENCES

1 
A. Al-Hasaeri, A. Marjanovic, P. Tadic, S. Vujnovic, and Z. Durovic, ``Probability of detection and clutter rate estimation in target tracking systems: Generalised maximum likelihood approach,'' IET Radar, Sonar & Navigation, vol. 13, no. 11, pp. 1963-1973, November 2019. DOI
2 
Z.-A. Ansari, M.-J. Nigam, and A. Kumar, ``Accurate tracking of manoeuvring target using scale estimation and detection,'' Defence Science Journal, vol. 69, no. 5, pp. 495-502, September 2019. DOI
3 
A.-S. Begum, T. Kalaiselvi, and K. Rahimunnisa, ``A computer aided breast cancer detection using unit-linking pulse coupled neural network & multiphase level set method,'' Journal of Biomaterials and Tissue Engineering, vol. 12, no. 8, pp. 1497-1504, August 2022. DOI
4 
G. Wang and Y. Huang, ``Medical-image-fusion algorithm based on a detail-enhanced and pulse-coupled neural-network model stimulated by parallel features,'' Scientia Sinica Informationis, vol. 50, no. 2, pp. 239-260, May 2020. DOI
5 
P. Huang, F. Wang, Y. Xiang, and H. You, ``Moving target detection and tracking of satellite videos based on V-CSK algorithm,'' Journal of University of Chinese Academy of Sciences, vol. 38, no. 3, pp. 392-401, November 2021. DOI
6 
H. Zhang, J. Xie, J. Ge, Z. Zhang, and W. Lu, ``Finite sensor selection algorithm in distributed MIMO radar for joint target tracking and detection,'' Journal of Systems Engineering and Electronics, vol. 31, no. 2, pp. 290-302, April 2020. DOI
7 
O. Doukhi, S. Hossain, and D.-J. Lee, ``Real-time deep learning for moving target detection and tracking using unmanned aerial vehicle,'' Journal of Institute of Control, vol. 26, no. 5, pp. 295-301, May 2020. DOI
8 
B. Zhang, ``Moving target detection and tracking based on camshift algorithm and Kalman filter in sport video,'' International Journal of Performability Engineering, vol. 15, no. 1, pp. 288-297, January 2019. DOI
9 
Y. He, S. Wei, T. Yang, W. Jin, M. Liu, and X. Zhai, ``Feature-based fusion of dual band infrared image using multiple pulse coupled neural network,'' Journal of Beijing Institute of Technology, vol. 28, no. 1, pp. 133-140, March 2019. DOI
10 
B. Niepceron, F. Grassia, and A.-N.-S. Moh, ``Brain tumor detection using selective search and pulse-coupled neural network feature extraction,'' Computing and Informatics, vol. 41, no. 1, pp. 253-271, April 2022. DOI
11 
Z. Xing, H. Jia, and W. Song, ``3DPCNN based on whale optimization algorithm for color image segmentation,'' Journal of Intelligent and Fuzzy Systems, vol. 37, no. 1-3, pp. 1-13, July 2019. DOI
12 
C. Li, H. Gao, Y. Yang, X. Qu, and W. Yuan, ``Segmentation method of high-resolution remote sensing image for fast target recognition,'' International Journal of Robotics & Automation, vol. 34, no. 3, pp. 216-224, January 2019. DOI
13 
Z. Yang, J. Lian, Y. Guo, S. Li, D. Wang, W. Sun, and Y. Ma, ``An overview of PCNN model's development and its application in image processing,'' Archives of Computational Methods in Engineering, vol. 26, no. 2, pp. 491-505, January 2019. DOI
14 
L. Zuo, F. Xu, and C. Zhang, ``A multi-layer spiking neural network-based approach to bearing fault diagnosis,'' Reliability Engineering and System Safety, vol. 225, pp. 2-14, September 2022. DOI
15 
S.-S. Saranya and N.-S. Fatima, ``IoT information status using data fusion and feature extraction method,'' Computers, Materials & Continua, vol. 70, no. 1, pp. 1857-1874, February 2022. DOI
16 
H. Pan, W. Sun, and Q. Sun, ``Deep learning based data fusion for sensor fault diagnosis and tolerance in autonomous vehicles,'' Chinese Journal of Mechanical Engineering, vol. 34, no. 3, p. 72, July 2021. DOI
17 
S. Filist, R.-T. Al-Kasasbeh, and O. Shatalova, ``Biotechnical system based on fuzzy logic prediction for surgical risk classification using analysis of current-voltage characteristics of acupuncture points,'' Journal of Integrative Medicine, vol. 20, no. 3, pp. 252-264, May 2022. DOI
18 
C. Hebbi and H. Mamatha, ``Comprehensive dataset building and recognition of isolated handwritten Kannada characters using machine learning models,'' Artificial Intelligence and Applications, vol. 1, no. 3, pp. 179-190, April 2023. DOI
19 
C. Hebbi and H. Mamatha, ``Comprehensive dataset building and recognition of isolated handwritten Kannada characters using machine learning models,'' Artificial Intelligence and Applications, vol. 1, no. 3, pp. 179-190, April 2023. DOI
20 
G. Bandewad, K.-P. Datta, B.-W. Gawali, and S.-N. Pawar, ``Review on discrimination of hazardous gases by smart sensing technology,'' Artificial Intelligence and Applications, vol. 1, no. 2, pp. 86-97, February 2023. DOI
21 
S. Zhou, G. Li, Q. Wang, J. Xu, Q. Ye, and S. Gao, ``Rotor attitude estimation for spherical motors using multiobject Kalman KCF algorithm in monocular vision,'' IEEE Transactions on Industrial Electronics, vol. 70, no. 1, pp. 1-13, January 2023. DOI
22 
M. C. Daderwal, V. S. Sreeraj, S. Suhas, N. P. Rao, and G. Venkatasubramanian, ``Montreal cognitive assessment (MoCA) and digit symbol substitution test (DSST) as a screening tool for evaluation of cognitive deficits in schizophrenia,'' Psychiatry Research, vol. 319, pp. 114731-114739, 2022. DOI
23 
S. Dong, Z. Shen, L. Zhou, and H. Yu, ``Nonlinear gain-based event-triggered tracking control of a marine surface vessel with output constraints,'' Ocean Engineering, vol. 262, no. 10, pp. 1-12, 2022. DOI

Author

Zhongwei Wang
../../Resources/ieie/IEIESPC.2025.14.3.366/author1.png

Zhongwei Wang is an Associate Professor of Physical Education at Henan Vocational College of Nursing. He graduated from Henan University in 2001 with a bachelor's degree in Physical Education and received a master's degree in Physical Education and Training from Henan University in 2010. His research interests include sports training and physical education.

Shujuan Dong
../../Resources/ieie/IEIESPC.2025.14.3.366/author2.png

Shujuan Dong is a Professor of Computer Science and Technology at Yellow River Conservancy Technical Institute. She graduated from Wuhan University of Technology with a Bachelor of Engineering degree in 2001 and received a Master of Science degree in Basic Mathematics from Henan University in 2011. Her research direction is software development.