
  1. (College of Art and Design, Hunan First Normal University, Changsha 410000, China)



Keywords: CNN, Chinese painting, action recognition, image classification, blended teaching

1. Introduction

Chinese painting education in colleges and universities is a form of appreciation and aesthetic education. Teachers often use 'cramming' to instill theoretical knowledge, ignoring the cultivation of students' emotional identity [1]. Chinese painting education is, to some extent, a form of traditional cultural education, in which the teaching of emotional attitudes and values matters more than the teaching of theoretical knowledge [2]. Emotional teaching is a process of "transposition thinking" in teaching activities; therefore, a more immersive teaching method is needed to realize this process [3]. This study employs a convolutional neural network (CNN) with a weight-offset module to introduce affective mechanisms creatively into a blended teaching approach. Teachers can observe students' emotions through action recognition algorithms and adjust teaching methods and content to improve the overall teaching effect. Various scene settings are used to realize "empathy", and micro-videos and other forms serve as online teaching support to restore the background of Chinese painting creation. The accuracy of the CNN in recognizing student actions and emotions is improved by offsetting weights according to the difference in dynamic information between the learned features and the original data. This optimizes the network's ability to learn and understand time-domain dynamic information and improves the classification precision of the model.

2. Related Work

With the development of multimedia information technology and the improvement of Internet technology, models based on Internet + education have gradually emerged. Online-offline blended teaching models have gradually become the mainstream model of college teaching activities owing to the impact of the novel coronavirus epidemic [4]. Many scholars at home and abroad have researched this topic. Aguirre T adopted a blended method to acquire and analyze quantitative and qualitative data; multiple regression was used to analyze the changes in teachers' working hours when they adopted digital education methods [5]. Gao et al. used Likert-type scale scores to survey students; the scores of students accepting offline teaching were 90.4${\pm}$0.9. The experimental results showed that a combination of online and offline teaching has good prospects [6]. Garrich used software-defined networking and network function virtualization technology to combine the flexibility and programmability of network infrastructure. This technology has been applied in many fields, such as online classrooms, live broadcast platforms, and video interaction [7]. Carter organized four online workshops and webinars to advocate a joint global forest observation agency, which will consider a hybrid online-offline mode in the future [8]. Bradley VC introduced the ``online mode'' into the field of ore analysis and proposed a method to quantify rare earth elements online accurately. To reduce measurement uncertainty, isotope dilution mass spectrometry was used alongside other quantification techniques. The extraction of rare earth elements from the uranium matrix was achieved by high-performance ion chromatography. Analytes were collected offline and sent to high-performance ion chromatography for online measurements to realize accurate quantification of the rare earth element content [9].

Muhammad K suggested that existing action recognition technology used the pre-training weights of different AI frameworks in the training stage to represent the video frames visually, which affected the confirmation of feature differences. The results showed that the accuracy was improved by 1% to 3% over other baseline models [10]. Zhu reported that action recognition methods based on CNN and long short-term memory were limited in exploring spatiotemporal relationship information and proposed a novel end-to-end spatiotemporal model. The experimental results showed that the model outperformed long short-term memory recognition methods on many mainstream datasets [11]. Dong M et al. attempted to solve the problem that the complex parameters of 3D CNN make the model difficult to train and migrate. They improved the existing model by combining the residual structure and attention mechanism. Experimental results showed that the fusion structure outperformed other single structures in many aspects [12]. Aiming at the lack of quantitative scoring standards in taekwondo competitions, Lee J proposed a taekwondo unit-technique human motion dataset, which contained 1936 motion samples of eight unit techniques performed by 10 experts and captured by two sets of camera views. The effective recognition rate of the proposed model was as high as 95.833%, and the lowest accuracy rate was 74.49% [13]. Singh et al. proposed a deep learning model that combined multiple CNN streams to recognize human activities in video sequences. The proposed model outperformed existing mainstream methods [14].

Scholars at home and abroad have studied the online-offline blended teaching mode and CNN action recognition, but few studies have combined the two. In addition, improvements to CNN in action recognition are mainly based on combining long short-term memory and attention mechanisms [15]. This research combines CNN action recognition and the blended teaching mode to classify students' learning status in time by recognizing their actions and expressions in class. A weight-offset module acting on the convolutional layer is proposed to improve the CNN, and an emotional classroom evaluation index is proposed for Chinese painting education to enhance the teaching effect of Chinese painting in colleges and universities.

3. Emotional Blended Teaching Mode Based on Improved Convolutional Neural Network

3.1 Action Recognition Classification Algorithm Combining Weight Offset and Dimension Reduction Retrieval

Aiming at the problem that the standard CNN cannot understand dynamic information in the action recognition classification task, a weight-offset module is designed. This module assigns a higher weight parameter to one convolutional layer and uses this dominant layer to guide the other convolutional layers. The weight offset is carried out based on the information difference, because the dynamic information of the features learned by the neural network differs from that of the original data. The network's ability to understand time-domain dynamic information is thereby optimized, improving the classification accuracy of the model [16]. Fig. 1 shows the framework of the weight-offset module.

In Fig. 1, the convolutional layer first sorts the learning potential of each dynamic input data, then calculates similarity and assigns more learning potential to the dynamic data. The gradient of the weight parameters of each convolutional layer is improved, and the direction of optimization is changed. The update of the weight parameters is modified by adding a time-domain constraint to the loss function.

This procedure is repeated by feeding the dynamic data with the lowest weight back into the convolutional layer for optimization and updating, while the dynamic data with the highest weight is used as the output. Fig. 2 presents the evolution of the dynamic information of the data throughout the training procedure.

In Fig. 2, the squares represent the dynamic data of the convolutional neural network's input layer and of several intermediate layers. A closer color of the squares indicates a higher similarity of dynamic data between layers. The convolutional layer whose dynamic data is closest to that of the input layer is the dominant layer. The module optimizes training by reducing the overall difference in time-domain dynamic information between the convolutional layers and the input layer, improving the network model's ability to understand time-domain information. The dynamic information set of the convolutional-layer feature data is $T^{c}$, the dynamic information set of the original input-layer data is $T$, and their numerical difference is $d$. The calculation method is expressed as formula (1).

(1)
$ d=\left| \delta \left(T\right)-\delta \left(T^{c}\right)\right| $

$\delta $ denotes the quantification of a dynamic information set. The dynamic information set of each data layer is calculated from inter-frame differences to obtain more comprehensive dynamic information and prevent valid information from being excluded, as shown in Eqs. (2) and (3).

(2)
$\begin{align} \delta \left(T^{c}\right)&=\det \left(T^{c}+E\right) \end{align} $
(3)
$\begin{align} \delta \left(T\right)&=\det \left(T+E\right) \end{align} $

Through the dynamic information extraction of Eqs. (2) and (3), the overfitting of the network can be avoided owing to the high local attention of the weight-offset module during the optimization process. The dynamic information of the feature data of the convolutional layer can be calculated according to Eqs. (4) and (5).

(4)
$\begin{align} T^{c}&=\left\{t_{1}^{c},t_{2}^{c},t_{3}^{c},\cdots ,t_{n-1}^{c}\right\} \end{align} $
(5)
$\begin{align} t_{i}^{c}&=u_{i+1}^{c}-u_{i}^{c},1\leq i\leq n-1 \end{align} $
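As a concrete illustration, Eqs. (1)-(5) can be sketched in a few lines of NumPy. This is a minimal sketch under two assumptions not stated in the text: the frame features are small square matrices so that the determinant in Eqs. (2)-(3) is defined, and the per-difference determinants are summed into one scalar summary of the set.

```python
import numpy as np

def dynamic_info_set(frames):
    # Eqs. (4)-(5): inter-frame differences t_i = u_{i+1} - u_i
    frames = np.asarray(frames, dtype=float)
    return frames[1:] - frames[:-1]

def delta(T):
    # Eqs. (2)-(3): det(t + E) for each difference matrix; summing the
    # determinants into a single scalar is an assumption of this sketch
    E = np.eye(T.shape[-1])
    return float(sum(np.linalg.det(t + E) for t in T))

def info_difference(T, T_c):
    # Eq. (1): d = |delta(T) - delta(T^c)|
    return abs(delta(T) - delta(T_c))
```

With identical inputs the difference $d$ is zero; during training, the offset module drives $d$ between the learned features and the original data toward zero.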

$u^{c}$ is the feature data, $n$ is the total number of frames in the data group, and $t_{i}^{c}$ is the difference between adjacent frames. The calculation of the action information set of the original input-layer data follows the same method and is therefore omitted. Because the number of data channels differs across convolutional layers, this study randomly samples the feature data with a large number of channels in proportion, ensuring that the distribution of each data group remains consistent. When updating the weight parameters, the weight-offset module needs to quantify the distance between $T^{c}$ and $T$ and take the quantized result as the time-domain constraint of the neural network loss function. The combination of the time-domain constraint and the loss function jointly affects the gradient of the network. The distance between $T^{c}$ and $T$ is minimized while ensuring the accuracy of the optimized model, which reduces the time-domain difference between the data of the weight-offset module and the original input-layer data. Eqs. (6) and (7) express the loss function and the update operations of the weight parameters after adding the time-domain constraint, respectively.

(6)
$\begin{align} L&=L_{p}\left(l,\hat{l}\right)+\lambda D\left(T,T^{c}\right) \end{align} $
(7)
$\begin{align} W&=\arg \min _{W}\left\{L_{p}\left(l,\hat{l}\right)+\lambda D\left(T,T^{c}\right)\right\} \end{align} $

$L_{p}\left(l,\hat{l}\right)$ is the loss function composed of the real value of the input sample and the output value of the model, $\lambda $ is the constraint weight, $D\left(T,T^{c}\right)$ is the time-domain constraint, and $W$ is the weight parameter of the convolutional layer. Because the original data and the feature data do not lie in the same data space, a common difference calculation cannot capture this cross-spatiality, resulting in inaccurate results. Transfer learning was adopted for multi-domain adaptation, and the maximum mean discrepancy (MMD) algorithm was used to quantify the distance of cross-spatial dynamic information accurately. The MMD algorithm maps the two datasets using a continuous function set $F$ in a certain sample space and judges whether the two datasets belong to the same distribution according to the difference in the mapping results. The selection of $F$ is the key to the accuracy of the calculation result. The definition of the MMD algorithm is shown in Eqs. (8) and (9).

(8)
$\begin{align} MMD\left[F,p,q\right]&=\underset{f\in F}{\sup }\left(E_{p}\left[f\left(x\right)\right]-E_{q}\left[f\left(y\right)\right]\right) \end{align} $
(9)
$\begin{align} MMD\left[F,X,Y\right]&=\underset{f\in F}{\sup }\left(\frac{1}{m}\sum _{i=1}^{m}f\left(x_{i}\right)-\frac{1}{n}\sum _{i=1}^{n}f\left(y_{i}\right)\right) \end{align} $
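Under a Gaussian kernel (the kernel used later in Eq. (11)), the empirical estimate of Eq. (9) has a closed kernel-trick form. The following NumPy sketch uses the standard biased estimator; the bandwidth $\sigma$ and the sample sets are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), cf. Eq. (11)
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2 * sigma ** 2))

def mmd_squared(X, Y, sigma=1.0):
    # Biased empirical MMD^2 (Eq. (9)) via the kernel trick:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    kxx = np.mean([gaussian_kernel(a, b, sigma) for a in X for b in X])
    kyy = np.mean([gaussian_kernel(a, b, sigma) for a in Y for b in Y])
    kxy = np.mean([gaussian_kernel(a, b, sigma) for a in X for b in Y])
    return kxx + kyy - 2 * kxy
```

Identical sample sets give an MMD of exactly zero; as the two distributions move apart, MMD$^2$ approaches 2 for this kernel.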

$p$ and $q$ are Borel probability distributions. $X$ and $Y$ are datasets drawn independently and identically from $p$ and $q$, with $m$ and $n$ samples, respectively, and $f$ is any function in the function set $F$. The weight-offset module uses the MMD algorithm in a reproducing kernel Hilbert space, mapping the two action information sets, which lie in different spaces, into the Hilbert space with a kernel function. The distance between the two data groups then quantifies the difference in action information between the two groups across spatial-temporal domains. The completeness and regularity of the Hilbert space ensure the stability of the calculation results, so the deviation is not too large even for a large amount of sample data. The mapping process of the reproducing kernel Hilbert space is expressed as Eqs. (10) and (11).

(10)
$\begin{align} f\left(x\right)&=\left\langle f,\varphi \left(x\right)\right\rangle _{H} \end{align} $
(11)
$\begin{align} \varphi \left(x\right)&=\exp \left[-\left\| x-x'\right\| ^{2}/\left(2\sigma ^{2}\right)\right] \end{align} $

$\varphi $ is the Gaussian-kernel feature mapping of the data, $H$ is the Hilbert space, $x'$ is a second input sample, and $\sigma $ is the sample standard deviation. The dynamic data of the original input layer and the dynamic data of the weight-offset layer are used as the cross-spatial input data. The maximum mean difference between the two sets of time-domain dynamic information is used as the constraint, as shown in Eq. (12).

(12)
$ D\left(T,T^{c}\right)=\frac{1}{n-1}\left\| \sum _{i=1}^{n-1}\varphi \left(t_{i}\right)-\sum _{i=1}^{n-1}\varphi \left(t_{i}^{c}\right)\right\| _{H}^{2} $
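Eq. (12) and the constrained loss of Eq. (6) can be sketched together. Expanding the squared RKHS norm with the kernel trick avoids computing $\varphi$ explicitly; the base loss value, the weight $\lambda$, and the bandwidth $\sigma$ below are illustrative assumptions.

```python
import numpy as np

def _k(a, b, sigma=1.0):
    # Gaussian kernel of Eq. (11)
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / (2 * sigma ** 2))

def time_domain_constraint(T, T_c, sigma=1.0):
    # Eq. (12): (1/(n-1)) * ||sum phi(t_i) - sum phi(t_i^c)||_H^2,
    # expanded with the kernel trick so phi is never computed explicitly
    sq = (sum(_k(a, b, sigma) for a in T for b in T)
          + sum(_k(a, b, sigma) for a in T_c for b in T_c)
          - 2 * sum(_k(a, b, sigma) for a in T for b in T_c))
    return sq / len(T)          # len(T) plays the role of n - 1

def constrained_loss(base_loss, T, T_c, lam=0.1):
    # Eq. (6): L = L_p(l, l_hat) + lambda * D(T, T^c)
    return base_loss + lam * time_domain_constraint(T, T_c)
```

When the feature-layer dynamics match the input-layer dynamics, the constraint vanishes and the loss reduces to the base classification loss; any mismatch adds a penalty proportional to $\lambda$.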

$T$ is the dynamic information set of the original data of the input layer. $T^{c}$ is the dynamic information set of the feature data of the weight-offset layer. The key frames $\left\{I_{i}\right\}_{i=1}^{M}$ of the image set are taken as the input of the model. The number of image blocks is expressed as (13).

(13)
$ S=\left(n-k+1\right)\left(n-k+1\right) $

$n$ is the size of the image, and $k$ is the size of the convolution kernel. Each image is sorted into a column vector, and $X_{i}$ is the feature vector of any image in the image set. The feature matrix is reduced in dimension to form a new low-dimensional feature library. The feature vector $F_{i}$ is extracted, and the similarity is calculated: each row of the feature matrix is converted to a vector, and the similarity is judged by comparing the cosine angle between the vectors, which is expressed as formula (14).

(14)
$ h=\sum _{i=1}^{n}\left(x_{i}y_{i}\right)/\left(\sum _{i=1}^{n}\left(x_{i}\right)^{2}\right)^{\frac{1}{2}}\left(\sum _{i=1}^{n}\left(y_{i}\right)^{2}\right)^{\frac{1}{2}} $

$x_{i}$ and $y_{i}$ are the elements of the two vectors, and $n$ is the number of vector elements. A value of $h$ closer to 1 indicates that the two vectors are more similar. Finally, the value range of the calculation result is normalized; the normalized value is expressed as Eq. (15).

(15)
$ S=0.5+0.5h $

The values of $S$ are sorted by size, and the search results are output. According to the similarity between the feature vector of the image to be retrieved and the feature library, the feature index is queried, the corresponding images are found, and the top-ranked images are displayed in decreasing order of similarity.
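The retrieval pipeline of Eqs. (13)-(15) can be sketched as follows; the toy feature vectors and the top-k cut-off are illustrative assumptions.

```python
import numpy as np

def num_patches(n, k):
    # Eq. (13): S = (n - k + 1)(n - k + 1) sliding-window image blocks
    return (n - k + 1) ** 2

def cosine_sim(x, y):
    # Eq. (14): h = sum(x_i * y_i) / (||x|| * ||y||)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def retrieve(query, library, top=3):
    # Eq. (15): normalize h in [-1, 1] to S = 0.5 + 0.5 h, then rank descending
    scores = [0.5 + 0.5 * cosine_sim(query, f) for f in library]
    order = np.argsort(scores)[::-1][:top]
    return [(int(i), scores[int(i)]) for i in order]
```

For a 28x28 image and a 5x5 kernel, Eq. (13) gives 576 blocks; a library entry identical to the query scores $S=1$ and is ranked first.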

Fig. 1. Weight offset module framework.
../../Resources/ieie/IEIESPC.2023.12.3.243/fig1.png
Fig. 2. Effect of the weight offset layer on dynamic information.
../../Resources/ieie/IEIESPC.2023.12.3.243/fig2.png

3.2 An Online-offline Blended Teaching Model Integrating Deep Learning for Chinese Painting Education

Blended teaching refers to integrating multiple teaching modes and teaching methods, which are usually assisted by digital information technology to improve teachers’ teaching effects and students’ learning efficiency [17]. To a certain extent, Chinese painting education is more about cultivating students’ emotional identification with China’s traditional culture rather than painting skills and aesthetic cognition [18]. Fig. 3 shows the online-offline blended teaching mode of Chinese painting in colleges and universities.

The blended teaching model is divided into three parts (Fig. 3). The first part analyzes teaching needs, the second part is online learning, and the third part is the offline classroom. According to the Chinese painting learning environment and resources in colleges and universities, students' psychological characteristics, learning needs, and learning content must be compared and analyzed; students' emotional needs in learning Chinese painting are the key part. Through online learning methods, including micro-video learning and collaborative online discussion, students can express and accumulate emotions. Teachers can leave questions and role-playing activities after online courses so that students can summarize, reflect on themselves, and maintain emotional continuity. Finally, results exchange, debate, and discussion are conducted in offline classes. The primary purpose is to facilitate the generation and expression of new emotions and to evaluate the learning content; this evaluation runs through the whole learning process instead of being placed only in the third part. Learning an emotional Chinese painting course requires students to master the basic knowledge of Chinese painting; it cultivates students' understanding of the traditional culture contained in Chinese painting and enhances their emotional identification with Chinese traditional culture [19]. This model mainly uses qualitative and quantitative research methods to conduct a fine-grained analysis of students' reflective experiences by evaluating students' reflection after class. Fig. 4 shows the flow of the blended learning mode integrating deep learning.

The teaching process is not a one-way process but a cyclic one, as shown in Fig. 4. Through reflection and summary, teachers can use the CNN model to analyze the emotions reflected by students' movements and expressions during class and adjust teaching methods and content in time. In this process, the function of the weight-offset module in the CNN is tied to the preset teaching objectives: the CNN focuses on monitoring the actions and expressions related to the target and assigns higher weights to the related images and videos, making recognition more accurate. Finally, whether students meet the established learning goals in the classroom is determined.

Fig. 3. The mixed learning mode of Chinese painting in colleges and universities.
../../Resources/ieie/IEIESPC.2023.12.3.243/fig3.png
Fig. 4. Blended teaching model integrating deep learning.
../../Resources/ieie/IEIESPC.2023.12.3.243/fig4.png

4. Training of CNN Action Recognition Classification Algorithm Based on Residual Network

Because the research objects of this study are students in a classroom, it is not appropriate to select images or videos of street scenes with complex backgrounds or athletes with large body movements as training sets. Therefore, the BBC Pose dataset is selected. This open-source dataset contains 20 videos, each 30 to 90 minutes long, recorded by the BBC with sign language interpretation; it includes dozens of subtle body movements and rich expressions. The training base network is a residual network combined with a 3D convolution kernel. The comparison models are the standard CNN, Mask R-CNN, and the proposed model, and the number of training iterations is 500. Fig. 5 presents the training results.

In Fig. 5(a), the convergence speed of the unoptimized CNN model is slow: it tends to converge after approximately 250 iterations and soon falls into a local optimal solution. After convergence, the error rate remains higher than 10%, indicating unsatisfactory training results. The Mask R-CNN model shows a fast convergence speed because of its unique lightweight structure (Fig. 5(b)); it tends to converge after approximately 100 iterations and remains stable after convergence, and the low number of iterations shows that its structure speeds up the convergence of network training. In Fig. 5(c), the proposed (Ours-CNN) model tends to converge after approximately 150 iterations and remains stable after convergence, with an error rate of approximately 5%. Although its iteration speed is not as fast as Mask R-CNN's, the classification tasks are based on real-world situations: a class contains approximately 30 to 60 people, so not many samples need to be analyzed. Therefore, the accuracy index is preferred over the convergence speed index, and the Ours-CNN model is better than the other models assessed. One hundred images cropped from the training set were used as the test set to compare the performance of the three models further. The recall rate and average precision are used as evaluation indicators. The test results are shown in Fig. 6.

In Fig. 6(a), the initial recall rates of the three models are similar. As the number of pictures increases, the recall rates all show a gradual upward trend. The Mask R-CNN model, relying on its lightweight structure and fast convergence, sees its recall rate increase rapidly once the number of pictures reaches 20; the recall rate is close to 80% when the number of pictures reaches 60, finally approaching 90%. The rise of the standard CNN is relatively stable, with a final recall rate of 74%. The recall rate of the proposed model increases rapidly when the number of pictures reaches 30; it is 70%, lower than that of Mask R-CNN, when the number of pictures is 60, but reaches a final recall rate as high as 95% when the number of pictures is 75. This is 15% and 21% higher than Mask R-CNN and CNN, respectively. In Fig. 6(b), the average precision of all three models decreases as the number of pictures increases. The CNN model decreases rapidly when the number of pictures is less than 30 and then decreases slowly after the number exceeds 30, with a final average precision of approximately 40%. The Mask R-CNN model maintains an average precision of approximately 95% when the number of pictures is less than 30, but its precision drops to 45% when the number reaches 60 and to only 16% when it exceeds 90. The Ours-CNN model always maintains a high average precision, with a more pronounced decline once the number of pictures exceeds 80. Its final average precision is 72%, which is 32% higher than that of the CNN model and 56% higher than that of the Mask R-CNN model.

Fig. 5. Comparison of the error rates of three models.
../../Resources/ieie/IEIESPC.2023.12.3.243/fig5.png
Fig. 6. Recall and average precision.
../../Resources/ieie/IEIESPC.2023.12.3.243/fig6.png

4.1 Practical Application Analysis of Online-offline Blended Learning Mode Integrating Deep Learning

An indirect assessment is more important because sentiment toward traditional culture sits at a higher level of sentiment classification [20]. Learning indicators, student behavior, personal reflective experience, and theoretical achievements are collected through observation, and the content analysis method is used for in-depth analysis. Students in Chinese painting elective courses at a university who volunteered to participate in the experiment are selected as research subjects. The experimental group adopts the emotional online-offline blended learning model integrating deep learning, while the control group adopts the traditional classroom-teaching model. There are 41 people in the experimental group (29 males and 12 females) and 47 people in the control group (21 males and 26 females). The indicators examined include theoretical knowledge, emotional development, and learning attitude. Theoretical knowledge includes classroom questions, homework, mid-term exams, and final exams; emotional development involves cultural, moral, and aesthetic development; learning attitude includes attendance, class concentration, and completion of online and offline self-learning content. Fig. 7 shows the scores of the two groups.

The differences in average score between the control and experimental groups on theoretical knowledge and learning attitude are insignificant (Fig. 7), at ${-}$0.19 and 0.46, respectively, while the differences in standard deviation are ${-}$1.10 and ${-}$2.78, respectively. This shows that the scores of the control group members deviate more from the mean value. Table 1 lists the t-tests of the three indicators for the two groups.

For the theoretical knowledge score, $F=4.03$ $\left(P=0.047<0.05\right)$ (Table 1), reaching the 0.05 significance level. Under the alternative hypothesis $H_{1}\colon \sigma _{1}^{2}\neq \sigma _{2}^{2}$, the statistic $t$ was 3.21 with a significance probability of $p=0.001<0.01$; the difference was quite significant. Regarding theoretical knowledge, the students in the control group show significant individual differences in their mastery. The trend in the learning attitude index is the same as that of the theoretical knowledge index: the average scores are similar, while the standard deviation and the $F$ value are larger, indicating that the individual differences in the learning attitude of the control group students are also obvious. In the emotion index, the mean of the control group was 1.12, the standard deviation 1.33, and the mean absolute error 0.201; the mean of the experimental group was 2.76, the standard deviation 2.38, and the mean absolute error 0.383. The average score of the experimental group was higher than that of the control group, but the individual differences were also greater. The subordinate indicators were evaluated for further explanation, as shown in Table 2.

As shown in Table 2, the average number of words in the test group was 1312.72 versus 1016.43 in the control group, a 29.15% increase. The appreciation index of the test group was 5.73 versus 5.11, a 12.13% increase. The moral development index was 6.29 versus 5.62, an 11.92% increase. The social development index was 4.14 versus 1.97, a 110.15% increase. The motivational development index was 4.28 versus 3.67, a 16.62% increase. Overall, the two groups differ little in appreciation and moral development but differ greatly in social and motivational development.

Fig. 7. Scores of the experimental group and the control group on each indicator.
../../Resources/ieie/IEIESPC.2023.12.3.243/fig7.png
Table 1. Scores of the experimental and control groups on each index t-test.

(F and its significance are from Levene's test for equality of variances; t, its significance, and the mean difference are from the t-test for equality of means.)

Evaluation indicator    Variance assumption           F      Sig.    t      Sig.    Mean difference
Theoretical knowledge   Equal variances assumed       4.03   0.047   3.21   0.001   0.682
                        Equal variances not assumed   /      /       3.25   0.001   0.526
Learning attitude       Equal variances assumed       7.15   0.006   3.84   0.000   0.757
                        Equal variances not assumed   /      /       3.93   0.000   0.361
Emotional development   Equal variances assumed       12.87  0.001   -4.02  0.000   0.210
                        Equal variances not assumed   /      /       -3.88  0.000   0.383

Table 2. Statistics of students’ emotional evaluation indicators.

Group           Average word count   Appreciation   Moral development   Social development   Motivational development
Test group      1312.72 (317.47)     5.73 (1.82)    6.29 (1.01)         4.14 (0.75)          4.28 (0.03)
Control group   1016.43 (202.12)     5.11 (2.01)    5.62 (0.98)         1.97 (1.36)          3.67 (0.11)

5. Conclusion

This research proposed a CNN optimized by a weight-offset module and a dimensionality-reduction retrieval method for recognizing and classifying students' behavior in the online-offline blended teaching mode. The BBC Pose dataset was used as the training and test set. Compared with the baseline models, the proposed CNN has a faster convergence rate and an error rate of approximately 5%; its recall rate was 15% and 21% higher, and its average precision 32% and 56% higher. Compared with the traditional teaching mode, the designed emotional blended teaching method integrating deep learning improved the emotional evaluation indices by up to 110.15% and 16.62%, a significant improvement. A limitation of this study is that it did not explore the influence of the gender, age, and profession of the experimental subjects on the results; follow-up research is needed to explore this issue further.

6. Funding

This research is supported by: 1. the Research Project of Teaching Reform in Colleges and Universities of Hunan Province, "Research and Practice of Blended Teaching Reform of the Chinese Figure Painting Course in the Fine Arts Major of Normal Universities in the Education 4.0 Era" (No. HNJG-2021-0233); 2. the Philosophy and Social Science Foundation of Hunan Province, "Study of Wang Fuzhi's Theory of 'Divinity-Truth' and Traditional Painting Aesthetics" (No. 18YBQ034); 3. the Research Project of Teaching Reform in Colleges and Universities of Hunan Province, "Research on Blended Teaching of the Printmaking Basic Course in the Era of 'Internet +'" (No. HNJG-2020-1106).

REFERENCES

1 
X. Lv, M. Li, “Application and Research of the Intelligent Management System Based on Internet of Things Technology in the Era of Big Data”. Mobile Information Systems 2021, 1 (16), pp. 1-6.DOI
2 
C. Wu, Y. Li, “Reflections on Traditional Chinese Medicine Culture Teaching in the Context of International Chinese Education”. Journal of Contemporary Education Research, 2022, 6(5), pp. 102-107.DOI
3 
K. Artman-Meeker, A. Fettig, J. E. Cunningham, et al. “Iterative Design and Pilot Implementation of a Tiered Coaching Model to Support Socio-Emotional Teaching Practices”. Topics in Early Childhood Special Education, 2022, 42(2), pp. 124-136.DOI
4 
X. He, H. Yang, “Exploration on Emergency Online Teaching Mode of Russian Major”, Open Access Library Journal, 2021, 8(10), pp. 1- 9.DOI
5 
T. Aguirre, L. Aperribai, L. Cortabarría, et al. “Challenges for Teachers’ and Students’ Digital Abilities: A Mixed Methods Design Study”, Sustainability, 2022, 14 (8), pp. 1-9.DOI
6 
R. Gao, J. Lloyd, Y. Kim, “A Desirable Combination for Undergraduate Chemistry Laboratories: Face-to-Face Teaching with Computer-Aided, Modifiable Program for Grading and Assessment”, Journal of Chemical Education, 2020, 97(9), pp. 3028-3032.DOI
7 
M. Garrich, J. L. Romero-Gázquez, F. J. Moreno-Muro, et al. “IT and Multi-layer Online Resource Allocation and Offline Planning in Metropolitan Networks”, Journal of Lightwave Technology, 2020, 38(12), pp. 3190-3199.DOI
8 
S. Carter, M. Herold, I. Jonckheere, et al. “Capacity Development for Use of Remote Sensing for REDD+ MRV Using Online and Offline Activities: Impacts and Lessons Learned”, Remote Sensing, 2021, 13(11), pp. 2172 -2185.DOI
9 
V. C. Bradley, B. T. Manard, B. D. Roach, et al. “Rare Earth Element Determination in Uranium Ore Concentrates Using Online and Offline Chromatography Coupled to ICP-MS”, Minerals, 2020, 10(1), pp. 55 -66.DOI
10 
K. Muhammad, Mustaqeem, A. Ullah, et al. “Human action recognition using attention-based LSTM network with dilated CNN features”, Future Generation Computer Systems, 2021, 125, pp. 820-830.DOI
11 
A. Zhu, Q. Y. Wu, R. Cui, et al. “Exploring a Rich Spatial-temporal Dependent Relational Model for Skeleton-based Action Recognition by Bidirectional LSTM-CNN”. Neurocomputing, 2020, 414(5), pp. 90-100.DOI
12 
M. Dong, Z. Fang, Y. Li, et al. “AR3D: Attention Residual 3D Network for Human Action Recognition”, Sensors, 2021, 21(5), pp. 1656 -1670.DOI
13 
J. Lee, H. Jung, “TUHAD: Taekwondo Unit Technique Human Action Dataset with Key Frame-Based CNN Action Recognition”. Sensors, 2020, 20(17), pp. 4871-4891.DOI
14 
R. Singh, R. Khurana, A. Kushwaha, et al. “Combining CNN streams of dynamic image and depth data for action recognition”, Multimedia Systems, 2020, 26(5), pp. 313-322.DOI
15 
K. Washida, N. Matsui, M. Shoji, et al. “Long-term changes of psychological factors regarding exercise in patients with type 2 diabetes after discharge and the effect of these changes on glycemic control”, Journal of physical therapy science, 2021, 33(12), pp. 898-902.DOI
16 
A. Plessas, S. Cowie, D. Parry, et al. “Machine learning with a snapshot of data: Spiking neural network ‘predicts’ reinforcement histories of pigeons’ choice behavior”. Journal of the Experimental Analysis of Behavior, 2022, 117(3), pp. 301-319.DOI
17 
X. Wu, “Research on the Reform of Ideological and Political Teaching Evaluation Method of College English Course Based on “Online and Offline” Teaching”, Journal of Higher Education Research, 2022, 3(1), pp. 87-90.URL
18 
Q. Zhang, “Understanding the Profound Power of Traditional Chinese Culture Through Landscape Painting”, QiuShi, 2021, 13(5), pp. 73-80.URL
19 
A. Finisguerra, L. F. Ticini, L. P. Kirsch, et al. “Dissociating embodiment and emotional reactivity in motor responses to artworks”, Cognition, 2021, 212(9), pp. 1-34.DOI
20 
D. Huang, Y. Huang, N. Adams, et al. “Twitter-Characterized Sentiment Towards Racial/Ethnic Minorities and Cardiovascular Disease (CVD) Outcomes”, Journal of Racial and Ethnic Health Disparities, 2020, 7(2), pp. 888-900.DOI

Author

Yuanyuan Tan
../../Resources/ieie/IEIESPC.2023.12.3.243/au1.png

Yuanyuan Tan (born February 20, 1990) is a female Associate Professor with a doctorate. She received her bachelor's degree in Fine Arts from Hunan Normal University in 2011, her master's degree in Fine Arts from Hunan Normal University in 2014, and her doctorate in Fine Arts from Hunan Normal University in 2018. She now works at Hunan First Normal University as an Associate Professor; her research direction is fine arts. She has published 10 academic articles and presided over 6 scientific research projects.

Ge Yi
../../Resources/ieie/IEIESPC.2023.12.3.243/au2.png

Ge Yi (born May 6, 1982) is a female lecturer with a master's degree. She received her bachelor's degree in printmaking from the Sichuan Academy of Fine Arts in 2004 and her master's degree in software engineering from the Sichuan Academy of Fine Arts in 2009. She now works at Hunan First Normal University as a lecturer; her research direction is fine arts. She has published 4 articles and participated in 4 scientific research projects.