
  1. (Fine Arts College, Guangxi Arts University, Nanning 530000, China)
  2. (School of Automotive and Information Engineering, Guangxi Eco-engineering Vocational and Technical College, Liuzhou 545000, China)



Keywords: Density peak clustering, Color reconstruction, Peak signal-to-noise ratio, Structural similarity, Feature extraction, Color design

1. Introduction

The application of images permeates many fields of daily life, especially graphic color design. Color design is an important part of graphic design: it expresses the author’s emotions, conveys their ideas, and arouses emotional resonance in the audience. Color is also an important visual recognition element in graphic design that shapes the audience’s first impression of an image or work [1,2]. Traditional flat color design methods rely heavily on time-consuming manual processes, which makes it difficult to meet large-scale, long-term design needs. In contrast, image reconstruction technology can achieve diversified processing of image colors by transforming color information, providing more design ideas and possibilities for graphic design [3,4]. Image color reconstruction technology is therefore widely applied in the design field: images are edited to convert the colors of product images and thereby meet broader market demands. Accordingly, an image reconstruction algorithm combining Fabric Color Extraction (FCE) is proposed, aiming to improve the effectiveness of existing FCE and image color reconstruction methods and to meet the diverse needs of flat color design. The contribution of this research lies in combining bilateral filtering with Density Peak Clustering (DPC), which helps address the poor performance of traditional color extraction methods on fabrics with complex textures and patterns. The paper is organized in four parts. First, existing research on image reconstruction and image design is reviewed and its results are analyzed. Second, the research method is constructed, and the proposed improvements to FCE are introduced. Third, the performance of the proposed algorithm is verified through comparative experiments. Finally, the experimental results are summarized, the shortcomings of the research are pointed out, and future research directions are proposed.

2. Related Works

In flat design, color can become the main recognition feature of flat works. To accurately identify flat works, many scholars have researched image reconstruction and color feature extraction. Jijun et al. applied tomographic gamma scanning technology to reconstruct images in non-destructive analysis. Four image reconstruction algorithms were analyzed in this system and three radiation source distribution models were established. These experiments confirmed that the Maximum Likelihood-Expectation Maximization (ML-EM) algorithm had the best performance in image reconstruction using tomographic gamma scanning technology [5]. Jeyaraj et al. established a dynamic image reconstruction framework using deep learning algorithms in image reconstruction systems. This framework integrated advanced features from deep convolutional neural networks to identify inherent deep features in image patches. The proposed image reconstruction framework was validated in this study. These experiments confirmed that the proposed framework had the shortest training time and higher Structural Similarity (SSIM) [6]. Jianxin et al. proposed a color extraction method for multi-color yarns using a hyperspectral imaging system for FCE. K-means was used to cluster over-segmented regions. These experiments confirmed that the proposed method improved execution efficiency by 55% and achieved the required accuracy for color measurement [7]. Liu et al. proposed an image-processing reconstruction method for random fiber networks. This method combined fiber parameters to establish an image reconstruction model and discussed the model’s accuracy. The study analyzed the fiber diameter and quantity and investigated their limited impact on the fiber separator. These simulation experiments confirmed that the proposed method had small errors and was feasible, which made it an effective and reliable modeling method [8].

Many scholars have conducted extensive research on flat color design. Zhao et al. proposed an image color adjustment method in flat design to achieve locality and naturalness of image colors. This method was based on unsupervised learning and divided the color adjustment process into two stages: modifying region selection and target color propagation. These experiments confirmed that the proposed method effectively achieved image re-coloring [9]. Zhang et al. proposed an improved color transfer method in image design. This method selected a color chart as the target image, adjusted the brightness of the image, and matched it with the reference image. These experiments confirmed that the proposed method could be applied to the appearance of colored spun fabrics, improve usage efficiency, and save labor costs [10]. Zhao et al. proposed a multi-objective kernel intuitionistic fuzzy clustering algorithm in color design. A semi-supervised kernel intuitionistic fuzzy objective function was constructed and optimized. These experiments confirmed that this method had superior segmentation performance and lower time cost compared to other methods [11]. Wu et al. proposed a color image segmentation method based on convex K-means in image design. The variational model was solved using the Chambolle-Pock algorithm and simplex projection. These experiments confirmed that the single-stage strategy used in this research effectively improved the image segmentation accuracy. The proposed method had strong effectiveness and robustness [12].

In summary, the above research utilizes different technologies, including deep learning and hyperspectral imaging, to improve color accuracy and naturalness, and achieves certain results. However, these methods still have limitations, such as high computational cost, low time efficiency, and limited model generalization ability. To overcome these difficulties in color image processing, a flat color design method combining an Image Color Reconstruction Algorithm (ICRA) and FCE is proposed in this study by integrating multiple color spaces and user interactive design.

3. Methods

The color characteristics of a fabric are of significant importance in determining its visual effect and market value, so accurate extraction of fabric color is essential in design and production. Traditional color extraction methods often have limitations when dealing with complex textures and diverse patterns. A novel FCE algorithm is therefore proposed, which combines bilateral filtering and DPC techniques. This method not only improves the accuracy of color extraction but also accounts for the complexity of fabric texture, making the extracted colors more reflective of the actual characteristics of the fabric. The study first constructs a fabric color feature extraction model. The integrity of fabric image contours is ensured through bilateral filtering, and DPC is then used for color clustering analysis. Next, a matching Color Table (CB) is constructed and the effect of the color matching table is quantified, enabling users to complete interactive color selection.

Fig. 1. Color reconstruction algorithm flow.


3.1. Extraction of Fabric Colors

As an important carrier of information transmission, images enable people to perceive information directly through vision. Color spaces, as different representations of image color structure, are divided into hardware-oriented and object-oriented types depending on their intended use. The foundation of color information research is the color space, which quantifies people’s subjective perception of color into specific numerical values and provides a basis for color expression [13,14]. However, the choice of color space affects the accuracy of image color reconstruction. The color space selected for image color reconstruction must have both independence and uniformity. Independence requires that the components of the color space do not affect each other, so that changes in one component will not cause changes in other components [15,16]. Uniformity requires that equal changes in each component of the color space produce approximately consistent visual changes. There are multiple ways to express color space; existing color spaces include Red-Green-Blue (RGB), the International Commission on Illumination color space (CIE Lab), and Hue-Saturation-Value (HSV) [17]. Fig. 1 shows the ICRA used in this study for graphic design.

Fig. 1 shows the flowchart of the color reconstruction algorithm, which mainly includes FCE, color interaction selection, matching color generation, and multi-objective collaborative color transfer. For fabrics, conventional color extraction methods are not applicable: most fabrics have complex organizational structures, yarn textures, and patterns on their surfaces, which interfere with subsequent color extraction. To eliminate the influence of the organizational structure and yarn texture on color extraction, an FCE method based on texture filtering and DPC is used. Firstly, a clear fabric texture image is input, and denoising and smoothing are performed using a bilateral filtering algorithm. In fabric image acquisition, image signals may be contaminated by noise due to environmental and optical sensing instrument limitations, so fabric images are preprocessed using a bilateral filtering algorithm. Bilateral filtering is a non-linear filtering technique that, like Gaussian filtering, replaces the center pixel value with a weighted average of the pixels within the window. It improves on Gaussian filtering by taking into account not only the geometric distance between pixels but also the difference between each pixel value and the center pixel value: points whose pixel values differ significantly from the center pixel are assigned smaller weights. The operation of Gaussian filtering is represented by formula (1).

(1)
$\left\{\begin{aligned} I_{gf} (p)&=\frac{1}{N_{p} } \sum _{q\in S(p)}\exp \left(-\frac{(x_{p} -x_{q} )^{2} +(y_{p} -y_{q} )^{2} }{2\sigma _{s}^{2} } \right) I(q), \\ N_{p} &=\sum _{q\in S(p)}\exp \left(-\frac{(x_{p} -x_{q} )^{2} +(y_{p} -y_{q} )^{2} }{2\sigma _{s}^{2} } \right). \end{aligned}\right. $

In formula (1), $I_{gf}(p)$ refers to the pixel value output at point $p$ after Gaussian filtering. $S(p)$ stands for a matrix window centered on point $p$, with a size of $k \times k$. $x_p$ and $y_p$ refer to the horizontal and vertical coordinates of $p$, respectively. $x_q$ and $y_q$ stand for the horizontal and vertical coordinates of point $q$, and $I(q)$ is the pixel value at $q$. $N_p$ stands for the normalization term. $\sigma_s$ refers to the bandwidth constant of the Gaussian kernel function in the spatial domain. The operation of bilateral filtering is represented by formula (2).

(2)
$\left\{\begin{aligned} I_{bf} (p)&=\frac{1}{W_{p} } \sum _{q\in S(p)}\exp \left(-\frac{(x_{p} -x_{q} )^{2} +(y_{p} -y_{q} )^{2} }{2\sigma _{s}^{2} } \right) \exp \left(-\frac{\left\| I(p)-I(q)\right\| ^{2} }{2\sigma _{r}^{2} } \right) I(q), \\ W_{p} &=\sum _{q\in S(p)}\exp \left(-\frac{(x_{p} -x_{q} )^{2} +(y_{p} -y_{q} )^{2} }{2\sigma _{s}^{2} } \right) \exp \left(-\frac{\left\| I(p)-I(q)\right\| ^{2} }{2\sigma _{r}^{2} } \right). \end{aligned}\right. $

In formula (2), $I_{bf}(p)$ refers to the pixel value of point $p$ after bilateral filtering. $W_p$ refers to the normalization term. $\sigma_r$ refers to the bandwidth constant of the Gaussian kernel function in the color (range) domain. The fabric image processed by the bilateral filter then needs to be smoothed by a rolling guidance filter to remove the texture of the fabric weave. The rolling guidance filter can preserve the accuracy of pattern edge structures while removing fine image texture. The process of rolling guidance filtering is represented by formula (3).

(3)
$\left\{\begin{aligned} G(p)&=\frac{1}{N_{p} } \sum _{q\in S(p)}\exp \left(-\frac{(x_{p} -x_{q} )^{2} +(y_{p} -y_{q} )^{2} }{2\sigma _{s}^{2} } \right) I_{bf} (q), \\ N_{p} &=\sum _{q\in S(p)}\exp \left(-\frac{(x_{p} -x_{q} )^{2} +(y_{p} -y_{q} )^{2} }{2\sigma _{s}^{2} } \right). \end{aligned}\right. $

In formula (3), $G(p)$ refers to the pixel value output at point $p$ after Gaussian blurring of the bilateral-filtered image $I_{bf}$. Then, using the joint bilateral filtering formula, the blurred contour edges are restored through iterative calculation, as illustrated in Fig. 2.
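To make this preprocessing step concrete, the following is a minimal NumPy sketch of the bilateral filter of formula (2) and the rolling guidance iteration of formula (3) and Fig. 2, assuming a single-channel image scaled to $[0, 1]$; the window size, bandwidths, and iteration count are illustrative rather than the values used in the study.

```python
import numpy as np

def bilateral(src, guide=None, k=5, sigma_s=2.0, sigma_r=0.1):
    """Windowed filter of formula (2); when `guide` is given, the range weights
    are taken from the guide image (joint bilateral filtering). Dropping the
    range term altogether gives the Gaussian filter of formula (1)."""
    if guide is None:
        guide = src
    r = k // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(x ** 2 + y ** 2) / (2 * sigma_s ** 2))  # domain kernel
    src_p = np.pad(src, r, mode="reflect")
    gd_p = np.pad(guide, r, mode="reflect")
    out = np.zeros_like(src, dtype=float)
    for i in range(src.shape[0]):
        for j in range(src.shape[1]):
            s_patch = src_p[i:i + k, j:j + k]
            g_patch = gd_p[i:i + k, j:j + k]
            # Range kernel: small weight for pixels differing from the centre value.
            rng = np.exp(-(g_patch - guide[i, j]) ** 2 / (2 * sigma_r ** 2))
            w = spatial * rng
            out[i, j] = np.sum(w * s_patch) / np.sum(w)  # W_p normalisation
    return out

def rolling_guidance(img, n_iter=4, k=5, sigma_s=2.0, sigma_r=0.1):
    """Rolling guidance process of formula (3) and Fig. 2: start from a heavily
    smoothed, texture-free guide, then iteratively sharpen the large-scale
    pattern edges by joint bilateral filtering of the original image."""
    guide = bilateral(img, k=k, sigma_s=sigma_s, sigma_r=10.0)  # ~ plain Gaussian blur
    for _ in range(n_iter):
        guide = bilateral(img, guide=guide, k=k, sigma_s=sigma_s, sigma_r=sigma_r)
    return guide
```

Optimized equivalents, such as OpenCV's bilateral and rolling guidance filters, can replace this per-pixel loop when they are available.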

Fig. 2 shows the texture removal and edge restoration process of rolling guidance filtering. After the Gaussian blur, the edge information of the image is removed. After multiple iterations, the blurry edges gradually become clear. However, edge information that has been lost from the image cannot be restored, so the final iteration result lacks complete details. To ensure the integrity of the fabric contour, a calculation method that can effectively distinguish the texture of the fabric weave from the edges of the pattern is used. The Sobel operator is used to calculate the pixel gradient of the preprocessed fabric image, as represented by formulas (4) and (5) [18].

(4)
$F_{x} =\left[\begin{array}{ccc} {1} & {0} & {-1} \\ {2} & {0} & {-2} \\ {1} & {0} & {-1} \end{array}\right]\otimes I_{bf} .$

Fig. 2. Rolling guidance filter texture removal and edge restoration process.


In formula (4), $F_x$ refers to the gradient value in the horizontal direction. $\otimes$ refers to the convolution operator. The vertical gradient is represented by formula (5).

(5)
$F_{y} =\left[\!\!\begin{array}{ccc} {1} & {2} & {1} \\ {0} & {0} & {0} \\ {-1} & {-2} & {-1} \end{array}\!\!\right]\otimes I_{bf} $

In formula (5), $F_y$ refers to the gradient value in the vertical direction. The Sobel operator has two sets of $3 \times 3$ matrices. The horizontal and vertical templates are convolved with the image in a plane to obtain approximate brightness gradients for the horizontal and vertical directions. Then, the gradient sum of all pixels within the sliding window is calculated using formula (6).

(6)
$\left\{\begin{aligned} T_{x} (p)&=\left|\frac{1}{K_{x} } \sum _{q\in N_{p} }F_{x} (q) \right|,\\ T_{y} (p)&=\left|\frac{1}{K_{y} } \sum _{q\in N_{p} }F_{y} (q) \right|,\\ T(p)&=\sqrt{T_{x} (p)^{2} +T_{y} (p)^{2} },\\ \theta _{p} &=\arctan \left(\frac{T_{x} (p)}{T_{y} (p)} \right). \end{aligned}\right. $

In formula (6), $T_x(p)$ and $T_y(p)$ refer to the gradient values of $p$ in the $x$ and $y$ directions, respectively. $K_x$ and $K_y$ refer to standardized outputs. $T(p)$ refers to the gradient value of the region centered on point $p$. The larger the $T(p)$, the greater the likelihood that point $p$ is a contour edge point. $\theta_p$ refers to the direction of the gradient in the region at point $p$, with a value range of $[-90^\circ, 90^\circ]$.
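As an illustration of this step, the following is a small sketch of formulas (4) to (6), assuming a single-channel filtered image and using SciPy for the convolutions; the $7 \times 7$ window is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def regional_gradient(I_bf, win=7):
    """Sobel gradients (formulas (4)-(5)) followed by the windowed gradient
    magnitude T(p) and direction theta_p of formula (6), which are used to
    separate yarn texture from pattern edges."""
    sobel_x = np.array([[1, 0, -1],
                        [2, 0, -2],
                        [1, 0, -1]], dtype=float)
    sobel_y = sobel_x.T                       # vertical kernel of formula (5)
    Fx = convolve(I_bf, sobel_x, mode="reflect")
    Fy = convolve(I_bf, sobel_y, mode="reflect")
    # Normalised window sums of each gradient component, as in formula (6).
    Tx = np.abs(uniform_filter(Fx, size=win, mode="reflect"))
    Ty = np.abs(uniform_filter(Fy, size=win, mode="reflect"))
    T = np.hypot(Tx, Ty)                      # regional gradient magnitude
    theta = np.degrees(np.arctan2(Tx, Ty))    # direction, following formula (6)
    return T, theta
```

Pixels with a large $T(p)$ are kept as pattern contours, while low-gradient regions are treated as removable yarn texture.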
After clarifying the texture of the fabric, DPC is used for image color clustering. Formulas (7) to (10) give the steps of DPC.

(7)
$\left\{\begin{aligned} \rho _{i} &=\sum _{j=1}^{N}\eta (d_{ij} -d_{c} ), \\ \eta (x)&=\begin{cases} 1, & x<0, \\ 0, & x\ge 0. \end{cases} \end{aligned}\right. $

In formula (7), $\rho_i$ refers to the local density estimate of pixel $i$. $N$ refers to the total number of pixels. $d_{ij}$ refers to the color difference between pixels $i$ and $j$. $\eta$ is an indicator function, so $\rho_i$ counts the number of pixels whose color difference from pixel $i$ is smaller than $d_c$. $d_c$ is the cut-off distance between color feature points in the fabric sample; it determines the density of points and is used to select appropriate cluster centers. This density estimation method is relatively rough. To obtain a more accurate estimate, the Gaussian kernel density estimation method is used, represented by formula (8).

(8)
$\rho _{i} =\frac{1}{d_{c}^{3} (2\pi )^{3/2} } \sum _{j=1}^{N}\exp \left(-\frac{1}{2} \left(\frac{d_{ij} }{d_{c} } \right)^{2} \right). $

Then, the minimum color difference $\delta_i$ between points with higher density and pixel point $i$ is calculated using formula (9).

(9)
$\delta _{i} ={\mathop{\min }\limits_{j;\rho _{j} >\rho _{i} }} (d_{ij} ) .$

For the point having the highest density among all pixels, $\delta_i$ is the maximum color difference with other pixels, represented by formula (10).

(10)
$\delta _{i} ={\mathop{\max }\limits_{j\in [1:M]}} (d_{ij} ) .$

In formula (10), $M$ represents the total number of feature points involved in the calculation. After the above calculations, a decision graph is drawn and the cluster centers are selected. Then, the distance from each point to each cluster center is calculated, and each point is assigned to its closest cluster center. The FCE method proposed in this study is used for color extraction of silk fabrics; Fig. 3 shows the extraction effect.

Fig. 3. Silk fabric pattern edge extraction process.


Fig. 3 shows the edge extraction of silk fabric patterns. Fig. 3(a) shows the original image of the silk fabric. Fig. 3(b) presents the extraction results of the Sobel operator. Fig. 3(c) shows the regional gradient information. Fig. 3(d) shows the fabric image after texture elimination. As the figure shows, the original silk fabric image is transformed into a texture-free image with accurate pattern edges.
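To make the clustering step concrete, the following is a minimal sketch of the DPC procedure of formulas (7) to (10). It assumes the clustering is run on a sub-sampled set of pixel colors in a roughly uniform color space (e.g., CIE Lab), with Euclidean distance standing in for the color difference $d_{ij}$; the cut-off distance and the number of cluster centers are illustrative values that would normally be chosen from the decision graph.

```python
import numpy as np

def dpc_palette(colors, d_c=10.0, n_centers=5):
    """Density peak clustering sketch: Gaussian-kernel density (formula (8)),
    minimum distance to any denser point (formulas (9)-(10)), decision-graph
    centre selection, and nearest-centre assignment.
    colors: (N, 3) array of sub-sampled pixel colors."""
    N = len(colors)
    d = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=-1)
    # Density estimate of each pixel, formula (8).
    rho = np.exp(-0.5 * (d / d_c) ** 2).sum(axis=1) / (d_c ** 3 * (2 * np.pi) ** 1.5)
    # delta: smallest colour difference to any denser point.
    delta = np.empty(N)
    order = np.argsort(-rho)                  # indices from densest to sparsest
    delta[order[0]] = d[order[0]].max()       # densest point: maximum difference
    for rank, i in enumerate(order[1:], start=1):
        delta[i] = d[i, order[:rank]].min()
    # Decision graph: cluster centres have both high rho and high delta.
    centers = np.argsort(-(rho * delta))[:n_centers]
    labels = centers[np.argmin(d[:, centers], axis=1)]  # nearest-centre assignment
    return colors[centers], labels
```

The returned center colors form the extracted fabric palette, and `labels` gives the cluster membership used in the subsequent color reconstruction.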

3.2. Generation of Matching Color Tables

After FCE, a matching CB is generated. The generation of this table comprehensively considers the constraints of user interaction and visual differences, which makes it a combinatorial optimization problem. Because the matching CB used in image color transfer only needs to satisfy certain constraints, its generation has a high degree of randomness. In the generation of the matching CB, directed transfer is handled through interactive color selection, such as assigning color values to a certain area of a graphic design. Colors are selected based on the target sub-image of the target design image and the relevant areas of the template image. After the colors of the template and target images are selected, the image colors are reconstructed based on the color structure relationship. Fig. 4 shows the color structure relationship.

Fig. 4. Color structure relation.


Fig. 4 is a schematic diagram of the color structure relationship. Visual differences also have a significant impact on the generation of the matching CB. It is necessary to control the visual difference of the target image CB within a certain range and to preserve the similarity of the contrast matrix as far as possible. In Fig. 4, L1 represents the unprocessed color chart, L2 represents controlling the visual differences, and L3 represents the processed color chart. If the visual difference of the CB is greater than its proportion difference, Z-score standardization is required, as represented by formula (11).

(11)
$c^{*} =\frac{c-\mu }{\delta } . $

In formula (11), $c^*$ refers to the standardized value and $c$ to the original color difference value used in the standardization. $\mu$ refers to the mean of all sample data. $\delta$ refers to the standard deviation of all sample data. After all sample data are standardized, the root mean square error $E_c$ is represented by formula (12).

(12)
$E_{c} =\sqrt{\frac{\sum _{i=1}^{n}\left\| c^{*} \right\| }{n} } .$

In formula (12), $n$ refers to the number of samples. However, the pairwise comparison alone cannot capture the overall effect of the CB, so the visual difference between newly added color points and the overall mean of the CB is also considered. The overall CB contrast is represented by formula (13).

(13)
$E_{ca} =E_{c1} +E_{c2} .$

In formula (13), $E_{c1}$ refers to the contrast difference between the color points and the mean of the CB, $E_{c2}$ refers to the contrast difference among the color points within the CB, and $E_{ca}$ refers to the overall CB contrast. The matching effect of the CB is quantified using formula (14).

(14)
$E=E_{p} +E_{ca} .$

In formula (14), $E$ refers to the energy value that quantifies the matching effect of the CB. Based on the target CB, the CB selection is iteratively optimized. The background color in a design is relatively uniform, and its brightness needs to be adjusted to fit the overall effect. The brightness adjustment function is represented by formula (15).

(15)
$\frac{I_{tl} -C_{tl} }{P_{tl} -I_{tl} } =\frac{I_{ml} -C_{ml} }{P_{ml} -I_{ml} }. $

In formula (15), $C_{tl}$ and $C_{ml}$ refer to the average brightness of the target and template CBs, respectively. $I_{tl}$ and $I_{ml}$ refer to the average brightness of the target and template images. $P_{tl}$ refers to the original brightness of the target image, and $P_{ml}$ to the brightness value after the color-matching adjustment. Fig. 5 shows the brightness adjustment process.

Fig. 5. Brightness adaptive process.


Fig. 5 shows the brightness adaptive process. The target image can be adjusted based on the brightness of CB. After re-matching the colors, some information may be lost, which can be repaired by matching the missing pixels with boundary points.
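The following sketch puts formulas (11) to (15) together. It is only a schematic reading of the text: the visual difference is taken here as the Euclidean distance between CB colors, the concrete forms of $E_{c1}$ and $E_{c2}$ are assumptions based on the verbal description, and the user-interaction term $E_p$ of formula (14) is passed in as a precomputed value because its definition is not spelled out in this section.

```python
import numpy as np

def cb_energy(cb, e_p=0.0):
    """Energy of a candidate colour table (CB): z-score the pairwise colour
    differences (formula (11)), take their RMS contrast (formula (12)), add an
    assumed mean-difference term (formula (13)), and combine with the
    user-interaction penalty e_p (formula (14))."""
    cb = np.asarray(cb, dtype=float)                   # (n, 3) candidate CB
    diffs = np.linalg.norm(cb[:, None] - cb[None, :], axis=-1)
    c = diffs[np.triu_indices(len(cb), k=1)]           # pairwise colour differences
    c_star = (c - c.mean()) / c.std()                  # formula (11)
    e_c2 = np.sqrt(np.mean(np.abs(c_star)))            # formula (12): point contrasts
    e_c1 = np.sqrt(np.mean(np.linalg.norm(cb - cb.mean(axis=0), axis=1)))  # vs. CB mean
    e_ca = e_c1 + e_c2                                 # formula (13)
    return e_p + e_ca                                  # formula (14)

def adjust_brightness(p_tl, i_tl, c_tl, i_ml, c_ml):
    """Solve formula (15) for the adjusted brightness P_ml, keeping the
    target-side and template-side brightness ratios equal."""
    ratio = (i_tl - c_tl) / (p_tl - i_tl)
    return i_ml + (i_ml - c_ml) / ratio
```

Candidate CBs would be scored with `cb_energy` during the iterative optimization, and `adjust_brightness` returns the background brightness that keeps the two ratios of formula (15) equal.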

4. Results and Discussion

An analysis is conducted on the proposed ICRA to verify its performance. The experiment is divided into two parts: firstly, the FCE component of ICRA is evaluated; secondly, the complete ICRA is evaluated.

4.1. Performance Analysis of Fabric Color Extraction Algorithm

The study establishes selection criteria to ensure that the selected samples are representative. A total of 100 fabric samples are selected, covering various fabric types, including silk, cotton, and synthetic fibers. The samples are collected from local textile markets and online fabric suppliers to ensure that they cover multiple colors, textures, and constructions. Samples with clear fabric texture and rich colors are selected from the preliminary collection and evaluated by professional designers to ensure diversity in color and texture. The selected fabric samples are classified by fabric type and color for subsequent experimental analysis. The FCE experiments are conducted in a laboratory environment. The study selects Mean Pixel Accuracy (MPA) and Mean Intersection over Union (MIoU) as the test indicators. Table 1 shows the laboratory environment settings.

Table 1. Laboratory environment setting.

CPU | Intel(R) Core i7-7700 @ 3.6 GHz
GPU | GTX 1060
Operating system | Ubuntu 18.04 LTS
CUDA | 9.1
Python version | 3.7

Fig. 6. Convergence result of DPC algorithm on data set.


The study selects 100 fabric texture images and divides them into training and testing sets at a 7:3 ratio. On the training and testing sets, the number of DPC iterations is set to 100; the convergence results are shown in Fig. 6.

Fig. 6 presents DPC’s convergence results on the training and testing sets. As the number of iterations increased, the loss value of DPC gradually converged. In the first 40 iterations, DPC quickly converged to 0.95 on the training set and 0.87 on the test set. After 100 iterations, DPC gradually converged to 0.36 on the test set. The study compared DPC with other clustering algorithms. Fig. 7 shows the MPA results.

Fig. 7. MPA results of multiple algorithms.


Fig. 7 shows the MPA results of multiple algorithms. Mean Shift is a density-based non-parametric clustering algorithm that assumes that the data of different cluster classes follow different probability density distributions. Sample points converge toward the local density maximum, and points that converge to the same local maximum are considered members of the same cluster. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a representative density-based clustering method [19]. This method defines a cluster as the maximal set of density-connected points, delineating regions of sufficiently high density into clusters, and it can discover clusters of arbitrary shape in a noisy spatial database. Fuzzy C-Means (FCM), as a clustering algorithm based on a fuzzy mathematical model, can handle irregular datasets and datasets of different sizes [20]. K-means is an iterative clustering analysis algorithm that can be used for color extraction and visualization of color data. DPC achieved a high MPA after multiple iterations, reaching 94.9%. The lower MPA values of Mean Shift and DBSCAN indicate poorer pixel accuracy for these two algorithms. Fig. 8 shows the MIoU test results for the above algorithms.

Fig. 8. MIoU results of multiple algorithms.


Fig. 8(a) presents the MIoU of Mean Shift, DPC, and DBSCAN. Fig. 8(b) shows the MIoU of FCM and K-means. The MIoU values of the above algorithms increased with iteration. The MIoU value of Mean Shift gradually stabilized at 90.1% after 100 iterations. After 100 iterations of FCM and K-means, the MIoU values were higher, at 90.8% and 90.7%, respectively. The MIoU value of DPC was the highest, reaching 91.1%. In summary, DPC had strong advantages over other algorithms in fabric color and texture extraction, which better met the needs of color reconstruction.
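For reference, the two segmentation metrics used above can be computed from predicted and ground-truth label maps as in the following sketch; the class encoding is an illustrative assumption.

```python
import numpy as np

def mpa_miou(pred, gt, n_classes):
    """Mean Pixel Accuracy and Mean IoU from a confusion matrix.
    pred, gt: integer label maps of identical shape with values in [0, n_classes)."""
    conf = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(conf, (gt.ravel(), pred.ravel()), 1)   # rows: ground truth, cols: prediction
    tp = np.diag(conf).astype(float)
    acc_per_class = tp / np.maximum(conf.sum(axis=1), 1)
    iou_per_class = tp / np.maximum(conf.sum(axis=1) + conf.sum(axis=0) - tp, 1)
    return acc_per_class.mean(), iou_per_class.mean()  # MPA, MIoU
```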

4.2. Performance Analysis of Image Color Reconstruction Algorithm

The proposed image color reconstruction method was tested using Mean Absolute Error (MAE), Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and SSIM as evaluation metrics. The proposed ICRA was compared with similar algorithms, including Mean Shift, Mask R-CNN, U-Net, and Generative Adversarial Networks (GAN). The results are shown in Fig. 9.

Fig. 9. Comparison results of MSE and MAE for various algorithms.


Fig. 9 shows the MSE and MAE results of multiple algorithms. The Mean Shift clustering algorithm performs clustering via gradient ascent on the probability density function, which is intuitive, simple, and efficient. Mask Region-based Convolutional Neural Network (Mask R-CNN) is a deep learning algorithm used for instance segmentation and object detection. U-Net is a convolutional neural network used for image segmentation. GAN is widely used in image generation, learning the distribution of the training data through generative and discriminative networks. The proposed algorithm had the lowest MAE and MSE, while Mask R-CNN had the highest MAE and MSE. The MSE and MAE of Mean Shift and U-Net were also high, indicating that Mask R-CNN, Mean Shift, and U-Net had larger errors and poorer image reconstruction results. Fig. 10 shows the SSIM and PSNR of various algorithms.

Fig. 10. Results of comparison between SSIM and PSNR of various algorithms.


Fig. 10(a) shows the SSIM results of multiple algorithms. With the increase of Gaussian noise, the SSIM of the various algorithms increased. The SSIM of GAN was the lowest, indicating that the images reconstructed by GAN had lower similarity to the original images. The proposed image reconstruction method had a high SSIM, indicating a good reconstruction effect. Fig. 10(b) presents the PSNR for multiple methods. The proposed algorithm had a higher PSNR than Mask R-CNN, and both exceeded the remaining methods, indicating better pixel quality in their reconstructed images. The PSNR of GAN and Mean Shift was lower, indicating poorer pixel quality of the reconstructed images. Table 2 presents the results of the various algorithm metrics.

Table 2. Comparison results of multiple algorithm indexes.

Algorithm | MSE | MAE | SSIM | PSNR
Mean Shift | 46.787 | 43.105 | 0.826 | 27.756
Mask R-CNN | 62.146 | 57.611 | 0.867 | 28.439
U-Net | 44.838 | 42.519 | 0.835 | 28.103
GAN | 22.553 | 18.914 | 0.823 | 27.228
Proposed algorithm | 8.915 | 7.224 | 0.875 | 28.733

Table 2 presents the multiple algorithm indicators. The proposed algorithm performed best, with the lowest MSE of 8.915, whereas the other algorithms had an MSE higher than 20, with Mean Shift reaching an MSE of 46.787, indicating poorer image reconstruction performance. For MAE, Mask R-CNN reached the highest value of 57.611, indicating that this algorithm had a significant reconstruction error. The proposed algorithm had the lowest MAE, only 7.224, indicating a small error and the best reconstructed image quality. For SSIM and PSNR, Mask R-CNN had the highest values among the comparison algorithms, but they were still below those of the proposed algorithm. The proposed algorithm had SSIM and PSNR values of 0.875 and 28.733, respectively, indicating a high similarity between the real and reconstructed images and effective completion of the image reconstruction task.

To further validate the effectiveness of the proposed algorithm, tests were conducted on two different datasets. Dataset 1 is a silk fabric dataset, which contains images of 100 different styles of silk fabrics with rich color and texture features. Dataset 2 is a cotton fabric dataset, which contains 100 different types of cotton fabrics, with samples including various colors and patterns. The test results are shown in Table 3.

Table 3. Test results of algorithms on different datasets.

Dataset | Index | Proposed algorithm
Silk fabric dataset | MPA | 94.9%
Silk fabric dataset | MIoU | 91.1%
Silk fabric dataset | MAE | 7.224
Silk fabric dataset | PSNR | 28.733
Cotton fabric dataset | MPA | 92.5%
Cotton fabric dataset | MIoU | 88.6%
Cotton fabric dataset | MAE | 8.657
Cotton fabric dataset | PSNR | 27.890

According to Table 3, the proposed method exhibits high performance indicators on different types of fabric datasets, indicating that the algorithm has good adaptability and generalization ability. The results of the silk fabric dataset are slightly better than those of the cotton fabric dataset. This may be due to the more complex texture and richer colors of silk fabrics, which provide a wider range of testing conditions for the performance of algorithms.
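The four reconstruction metrics reported above can be computed as in the following sketch, assuming 8-bit images and a recent version of scikit-image for SSIM.

```python
import numpy as np
from skimage.metrics import structural_similarity

def reconstruction_metrics(original, reconstructed):
    """MAE, MSE, PSNR (dB), and SSIM between an original and a reconstructed
    image; both are uint8 arrays of identical shape, e.g. (H, W, 3)."""
    a = original.astype(np.float64)
    b = reconstructed.astype(np.float64)
    mae = np.mean(np.abs(a - b))
    mse = np.mean((a - b) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    ssim = structural_similarity(original, reconstructed, channel_axis=-1)
    return mae, mse, psnr, ssim
```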

5. Conclusion

A novel image reconstruction algorithm combining FCE was proposed to address flat color design. The study first extracted color features using the FCE algorithm, then selected template image colors to generate a matching CB, and finally achieved image reconstruction based on the matching CB. The proposed ICRA was examined in two parts: the FCE algorithm was tested first, and a comparative experiment was then conducted on the ICRA. These experiments confirmed that after 100 iterations, DPC gradually converged to 0.36 on the test set. In the comparison between DPC and other algorithms, DPC had a high MPA after multiple iterations, reaching 94.9%, with high pixel accuracy, and its MIoU reached 91.1%, significantly higher than the other methods. The proposed image reconstruction algorithm was then compared with other methods through comparative experiments. For MSE and MAE, the proposed algorithm had the lowest values of 8.9 and 7.2, respectively, indicating good regression performance and small errors. For SSIM and PSNR, Mask R-CNN had the highest values among the comparison algorithms, while the proposed algorithm reached an SSIM of 0.875 and a PSNR of 28.7, indicating a high similarity between the real and reconstructed images and effective realization of the color design. The proposed FCE algorithm demonstrated superior color extraction performance on multiple types of fabrics, with MPA and MIoU significantly higher than traditional methods. Through testing on different fabric datasets, the algorithm demonstrates good robustness and can effectively adapt to different image features and color combinations, which makes it highly operable in practical design and production environments.

Although the research provides a comprehensive image reconstruction solution for flat color design, there are still some shortcomings. The current algorithm may perform poorly when dealing with multiple colors and patterns: the extraction of fabric features may be affected by background interference, resulting in inaccurate color extraction results. Future research could introduce background separation techniques and utilize deep learning image segmentation algorithms to process the background, thereby improving the accuracy of fabric feature extraction. In addition, although the color matching table in the study integrates user interaction, the current method is still relatively basic and may not fully meet the complex needs of designers during color selection and adjustment. Subsequent work will explore richer real-time user interaction, such as AI-based recommendation systems that provide intelligent suggestions during the design process while making color adjustments more intuitive and flexible. Combining real-time user feedback with dynamic adjustment of the color-matching table and reconstruction parameters may further improve the practicality of the algorithm and user satisfaction.

Funding

The research is supported by the 2023 Guangxi Arts University scientific research project “Research on Painting Materials Techniques of Acrylic in Contemporary Context” (Project No. YB202310).

References

[1] Giri R., Chimouriya S. P., Ghimire B. R., 2023, Crossing strokes examination from cromaticity diagram, Science Heritage Journal (GWS), Vol. 7, No. 1, pp. 1-8.
[2] Maduekeh C. O., Obinwa I. N., 2022, Impacts of the Public Procurement Act 2007 on the procurement of public projects in Nigerian tertiary institutions, Malaysian E Commerce Journal, Vol. 6, No. 2, pp. 89-95.
[3] Saeed A., Husnain A., Zahoor A., Gondal R. M., 2024, A comparative study of cat swarm algorithm for graph coloring problem: Convergence analysis and performance evaluation, International Journal of Innovative Research in Computer Science Technology, Vol. 12, No. 4, pp. 1-9.
[4] Wu Y., 2024, Reference image aided color matching design based on interactive genetic algorithm, Journal of Electrical Systems, Vol. 20, No. 2, pp. 400-410.
[5] Jijun L., Suxia H., Peng L., 2020, Analysis and evaluation of tomographic gamma scanning image reconstruction algorithm, Kerntechnik, Vol. 85, No. 6, pp. 452-460.
[6] Jeyaraj P. R., Nadar E. R. S., 2020, Dynamic image reconstruction and synthesis framework using deep learning algorithm, IET Image Processing, Vol. 14, No. 7, pp. 1219-1226.
[7] Zhang J., Zhang K., Wu J., Xie X., 2021, Color segmentation and extraction of yarn-dyed fabric based on a hyperspectral imaging system, Textile Research Journal, Vol. 91, No. 7-8, pp. 729-742.
[8] Liu W., Wu L., Dang Y., Gou J., Xie W., Tang A., 2023, An image reconstruction modeling approach for micro-fibrous network of cellulose polymer separator concerning tensile and pore properties, Journal of Applied Polymer Science, Vol. 140, No. 22, pp. 53893-53905.
[9] Zhao N., Zheng Q., Liao J., Cao Y., Pfister H., Lau R. W., 2021, Selective region-based photo color adjustment for graphic designs, ACM Transactions on Graphics (TOG), Vol. 40, No. 2, pp. 1-16.
[10] Zhang N., Hu Q., Wang L., Meng S., Pan R., Gao W., 2021, Appearance change for colored spun yarn fabric based on image color transfer, Textile Research Journal, Vol. 91, No. 13, pp. 1439-1451.
[11] Zhao F., Zeng Z., Liu H., Lan R., Fan J., 2020, Semi supervised approach to surrogate-assisted multiobjective kernel intuitionistic fuzzy clustering algorithm for color image segmentation, IEEE Transactions on Fuzzy Systems, Vol. 28, No. 6, pp. 1023-1034.
[12] Wu T., Gu X., Shao J., Zhou R., Li Z., 2021, Colour image segmentation based on a convex K-means approach, IET Image Processing, Vol. 15, No. 8, pp. 1596-1606.
[13] Purohit J., Dave R., 2023, Leveraging deep learning techniques to obtain efficacious segmentation results, Archives of Advanced Engineering Science, Vol. 1, No. 1, pp. 11-26.
[14] Ritchie D., Guerrero P., Jones R. K., Mitra N. J., Schulz A., Willis K. D., Wu J., 2023, Neurosymbolic models for computer graphics, Computer Graphics Forum, Vol. 42, No. 2, pp. 545-568.
[15] Terris M., Dabbech A., Tang C., Wiaux Y., 2023, Image reconstruction algorithms in radio interferometry: From handcrafted to learned regularization denoisers, Monthly Notices of the Royal Astronomical Society, Vol. 518, No. 1, pp. 604-622.
[16] Chukwu N. C., James E. E., Emmanuel J. I., Inyang I. B., 2023, Packaging attributes and consumers' patronage of milk products, Sustainable Development, Vol. 6, No. 3, pp. 160-178.
[17] Alanko J. N., Vuohtoniemi J., Mäklin T., Puglisi S. J., 2023, Themisto: A scalable colored k-mer index for sensitive pseudoalignment against hundreds of thousands of bacterial genomes, Bioinformatics, Vol. 39, No. 1, pp. i260-i269.
[18] Besta M., Gerstenberger R., Peter E., Fischer M., Podstawski M., Barthels C., Hoefler T., 2023, Demystifying graph databases: Analysis and taxonomy of data organization, system designs, and graph queries, ACM Computing Surveys, Vol. 56, No. 2, pp. 1-40.
[19] Luo Y., Zhang Y., Xing T., He A., Zhao S., Huang Z., Xu W., 2023, Full-color tunable and highly fire-retardant colored carbon fibers, Advanced Fiber Materials, Vol. 5, No. 5, pp. 1618-1631.
[20] Diedrich K., Krause B., Berg O., Rarey M., 2023, PoseEdit: Enhanced ligand binding mode communication by interactive 2D diagrams, Journal of Computer-Aided Molecular Design, Vol. 37, No. 10, pp. 491-503.

Author

Jifeng Zhong

Jifeng Zhong obtained his Master of Fine Arts from the Central Academy of Fine Arts (CAFA) in 2015. He is currently employed at the School of Fine Arts, Guangxi Arts University. He has been invited to teach at Jilin Arts University and China Women’s University. His research fields include public art, multimedia art, illustration art, AI painting and image processing.

Yongguang Wei

Yongguang Wei graduated from East China Normal University with a Bachelor of Science degree in educational technology in 2013. He currently serves as a full-time teacher in the Digital Media Technology program at Guangxi Eco-Engineering Vocational & Technical College. He excels in computer image processing and post-production of film and television works, has a firm grasp of the implementation steps and technical details of image color reconstruction algorithms, and is proficient in using image processing software for image editing and processing.