The color characteristics of a fabric largely determine its visual effect and market value, so accurate fabric color extraction is essential in design and production. Traditional color extraction methods often have limitations when dealing with complex textures and diverse patterns. To address this, a novel fabric color extraction (FCE) algorithm is proposed, combining bilateral filtering with density peak clustering (DPC). This method not only improves the accuracy of color extraction but also accounts for the complexity of fabric texture, making the extracted colors more reflective of the fabric's actual characteristics. The study first constructs a fabric color feature extraction model. The integrity of fabric image contours is ensured through bilateral filtering, after which DPC is used for color clustering analysis. Next, a matching Color Table (CB) is constructed and its matching effect is quantified, enabling users to complete interactive color selection.
3.1. Extraction of Fabric Colors
As an important carrier of information transmission, images enable people to perceive information directly through vision. A color space is a particular representation of image color structure; color spaces are divided into hardware-oriented and object-oriented types according to their intended application. Color space is the foundation of color information research: it quantifies people's subjective perception of color into specific numerical values and provides a basis for color expression [13,14]. However, the choice of color space affects the accuracy of image color reconstruction.
The color space selected for image color reconstruction must be both independent and uniform. Independence requires that the components of the color space do not affect each other, so that changes in one component do not cause changes in the other components [15,16]. Uniformity requires that equal changes in each component of the color space produce approximately consistent visual changes. There are multiple ways to express color, and existing color spaces include Red-Green-Blue (RGB), the International Commission on Illumination (CIE) Lab space, and Hue-Saturation-Value (HSV), among others [17]. Fig. 1 shows the image color reconstruction algorithm (ICRA) for implementing graphic design research.
Fig. 1 shows the flowchart of the color reconstruction algorithm, which mainly includes
FCE, color interaction selection, matching color generation, and multi-objective collaborative
color transfer. For fabrics, conventional color extraction methods are not directly applicable: most fabrics have complex weave structures, yarn textures, and surface patterns that interfere with subsequent color extraction. To eliminate the influence of the weave structure and yarn texture on color extraction, an FCE method based on texture filtering and DPC is used. First, a clear fabric texture image is input, and denoising and smoothing are performed with a bilateral filtering algorithm. During fabric image acquisition, the image signal may be contaminated by noise due to environmental conditions and the limitations of optical sensing instruments, so fabric images are preprocessed with bilateral filtering. Bilateral filtering is a non-linear technique that, like Gaussian filtering, replaces the center pixel value with a weighted average of the pixels within the window. It improves on Gaussian filtering by taking into account not only the geometric distance between pixels but also the difference between each pixel's value and the center pixel's value: pixels whose values differ significantly from the center pixel are assigned smaller weights. The operation of Gaussian filtering is represented by formula (1).
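In its standard form, consistent with the symbol definitions that follow ($I(q)$ denotes the input pixel value at point $q$), the Gaussian filter can be written as

$$I_{gf}(p)=\frac{1}{N_p}\sum_{q\in S(p)}\exp\!\left(-\frac{(x_p-x_q)^2+(y_p-y_q)^2}{2\sigma_s^2}\right)I(q)\tag{1}$$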
In formula (1), $I_{gf}(p)$ refers to the pixel value output after Gaussian filtering at point $p$. $S(p)$ stands for a matrix window centered on point $p$, with a size of $k \times k$. $x_p$ and $y_p$ refer to the horizontal and vertical coordinates of $p$, respectively. $x_q$ and $y_q$ stand for the horizontal and vertical coordinates of point $q$, respectively. $N_p$ stands for a normalization factor. $\sigma_s$ refers to the bandwidth constant of the Gaussian kernel in the spatial domain. The operation of bilateral filtering is represented by formula (2).
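In its standard form under the same notation, with $\|I(p)-I(q)\|$ the color difference between pixels $p$ and $q$, the bilateral filter can be written as

$$I_{bf}(p)=\frac{1}{W_p}\sum_{q\in S(p)}\exp\!\left(-\frac{(x_p-x_q)^2+(y_p-y_q)^2}{2\sigma_s^2}\right)\exp\!\left(-\frac{\|I(p)-I(q)\|^2}{2\sigma_r^2}\right)I(q)\tag{2}$$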
In formula (2), $I_{bf}(p)$ refers to the pixel value output after filtering at point $p$. $W_p$ refers to the normalization factor. $\sigma_r$ refers to the bandwidth constant of the Gaussian kernel in the color domain. The fabric image processed by the bilateral filter then needs a rolling guided filter to smooth the texture of the fabric weave. Rolling guided filters can preserve the accuracy of pattern edge structures while removing image textures. The process of rolling guided filtering is represented by formula (3).
In formula (3), $I_{bf}(p)$ refers to the pixel value output at point $p$ after Gaussian blur. Then, using the joint bilateral filtering formula, the blurred contour edges are restored through iterative calculation, as shown in Fig. 2.
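A minimal Python sketch of this preprocessing pipeline (assuming OpenCV with the contrib ximgproc module; the function choices and parameter values are illustrative, not the authors' implementation):

```python
import cv2

def preprocess_fabric(img, d=9, sigma_color=30.0, sigma_space=7.0, iters=4):
    """Denoise a fabric image, then smooth away yarn texture while
    keeping pattern edges, via rolling guided filtering."""
    # Step 1: bilateral filtering suppresses acquisition noise but,
    # unlike a plain Gaussian blur, preserves strong edges.
    denoised = cv2.bilateralFilter(img, d, sigma_color, sigma_space)
    # Step 2: rolling guidance. Start from a Gaussian blur that removes
    # small-scale weave texture (and, unavoidably, blurs edges)...
    guide = cv2.GaussianBlur(denoised, (d, d), sigma_space)
    # ...then iteratively restore large-scale edges with a joint
    # bilateral filter guided by the previous iterate (cf. Fig. 2).
    for _ in range(iters):
        guide = cv2.ximgproc.jointBilateralFilter(
            guide, denoised, d, sigma_color, sigma_space)
    return guide

# smoothed = preprocess_fabric(cv2.imread("fabric.png"))
```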
Fig. 2 shows the texture removal and edge restoration process of rolling guided filtering.
After Gaussian blur, the edge information of the image is removed. After multiple
iterations, the blurry edges gradually become clear. However, the lost edge information
in the image cannot be restored, resulting in a lack of complete details in the final
iteration result. To ensure the integrity of the fabric contour, a calculation method
that can effectively distinguish the texture of the fabric organization from the edges
of the pattern is used. The Sobel operator is used to calculate the pixel gradient
of the preprocessed fabric image, represented by formulas (4) and (5) [18].
Fig. 2. Rolling guided filter texture removal and edge restoration process.
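With the standard Sobel kernels, formula (4) computes the horizontal gradient of the preprocessed image $I$ as

$$F_x=\begin{bmatrix}-1&0&+1\\-2&0&+2\\-1&0&+1\end{bmatrix}\otimes I\tag{4}$$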
In formula (4), $F_x$ refers to the gradient value in the horizontal direction. $\otimes$ refers
to the convolution operator. The vertical gradient is represented by formula (5).
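Correspondingly, the vertical kernel gives

$$F_y=\begin{bmatrix}-1&-2&-1\\0&0&0\\+1&+2&+1\end{bmatrix}\otimes I\tag{5}$$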
In formula (5), $F_y$ refers to the gradient value in the vertical direction. The Sobel operator has two $3 \times 3$ matrices; the horizontal and vertical templates are convolved with the image in the image plane to obtain approximate brightness gradients in the horizontal and vertical directions. Then, the gradient sum of all pixels within the sliding window is calculated using formula (6).
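One form consistent with the symbol definitions below is

$$T_x(p)=\frac{1}{K_x}\sum_{q\in S(p)}F_x(q),\qquad T_y(p)=\frac{1}{K_y}\sum_{q\in S(p)}F_y(q),$$

$$T(p)=\sqrt{T_x(p)^2+T_y(p)^2},\qquad \theta_p=\arctan\frac{T_y(p)}{T_x(p)}\tag{6}$$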
In formula (6), $T_x(p)$ and $T_y(p)$ refer to the gradient values of $p$ in the $x$ and $y$ directions, respectively. $K_x$ and $K_y$ refer to normalization factors. $T(p)$ refers to the gradient value of the region centered on point $p$; the larger $T(p)$ is, the more likely point $p$ is a contour edge point. $\theta_p$ refers to the gradient direction in the region at point $p$, with a value range of $[-90^\circ, 90^\circ]$. After the fabric texture has been clarified, DPC is used for image color clustering. Formulas (7) to (10) give the steps of DPC.
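A rough density estimate consistent with the symbol definitions below counts the fraction of points whose color difference to point $i$ falls within the cutoff distance $d_c$:

$$\rho_i=\frac{\eta}{N},\qquad \eta=\bigl|\{\,j\neq i : d_{ij}<d_c\,\}\bigr|\tag{7}$$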
In formula (7), $\rho_i$ refers to the estimated local density of pixel $i$. $N$ refers to the total number of pixels. $d_{ij}$ refers to the color difference value between pixels $i$ and $j$. $\eta$ refers to the number of points whose color difference to point $i$ is smaller than $d_c$. $d_c$ is the cutoff distance against which the color differences between the color feature points in the fabric sample are compared; this parameter is used to determine the density of points and select appropriate cluster centers for clustering. This density estimation method is relatively rough. To obtain a more accurate estimate, the Gaussian kernel density estimation method is used, represented by formula (8).
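Under the same notation, the Gaussian kernel estimate is

$$\rho_i=\sum_{j\neq i}\exp\!\left(-\frac{d_{ij}^2}{d_c^2}\right)\tag{8}$$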
Then, the minimum color difference $\delta_i$ between points with higher density and
pixel point $i$ is calculated using formula (9).
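In standard DPC notation,

$$\delta_i=\min_{j:\,\rho_j>\rho_i} d_{ij}\tag{9}$$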
For the point having the highest density among all pixels, $\delta_i$ is the maximum
color difference with other pixels, represented by formula (10).
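A common convention, consistent with the definition of $M$ below, is

$$\delta_i=\max_{1\le j\le M} d_{ij}\tag{10}$$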
In formula (10), $M$ represents the total number of neighboring points or feature points involved in the calculation. After the above calculations, a decision graph is drawn and cluster centers are selected. Then, the distance from each point to each cluster center is calculated, and each point is assigned to its nearest cluster center. The FCE method proposed in this study is applied to color extraction of silk fabrics. Fig. 3 shows the extraction effect.
Fig. 3. Silk fabric pattern edge extraction process.
Fig. 3 shows the edge extraction of silk fabric patterns. Fig. 3(a) shows the original image of the silk fabric. Fig. 3(b) presents the extraction results of the Sobel operator. Fig. 3(c) shows the regional gradient information. Fig. 3(d) shows the fabric image after texture elimination. As these images show, the original silk fabric image is transformed into an image with the weave texture removed and accurate pattern edges.
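A compact NumPy sketch of these clustering steps (a hypothetical helper, not the authors' code; in practice the pixel set would be subsampled, since the pairwise difference matrix grows as $O(N^2)$):

```python
import numpy as np

def dpc_colors(colors, dc, n_centers):
    """Density peak clustering over pixel color vectors (e.g. Lab)."""
    n = len(colors)
    # Pairwise color differences d_ij (Euclidean in color space).
    d = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=2)
    # Gaussian-kernel density estimate, as in formula (8).
    rho = np.exp(-((d / dc) ** 2)).sum(axis=1) - 1.0  # drop self-term
    # delta_i: min distance to any denser point (formula (9)); the
    # densest point instead takes its max distance (formula (10)).
    order = np.argsort(-rho)
    delta = np.empty(n)
    delta[order[0]] = d[order[0]].max()
    for k in range(1, n):
        i = order[k]
        delta[i] = d[i, order[:k]].min()
    # Decision graph: cluster centers have both large rho and large delta.
    centers = np.argsort(-(rho * delta))[:n_centers]
    # Assign every pixel to its nearest cluster center.
    labels = np.argmin(d[:, centers], axis=1)
    return centers, labels
```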
3.2. Generation of Matching Color Tables
After FCE, a matching CB is generated. The generation of this table comprehensively considers the limitations of user interaction and visual differences, which makes it a combinatorial optimization problem. Generating a matching CB for image color transfer requires certain constraints to be met, because the problem otherwise has a high degree of randomness. In generating the matching CB, directed transfer is achieved through interactive color selection, for example by assigning color values to a particular area of a graphic design. Colors are selected from the target sub-image of the target design image and the relevant areas of the template image. After selecting the colors of the template and target images, these image colors must be reconstructed according to the color structure relationship. Fig. 4 shows the color structure relationship.
Fig. 4. Color structure relation.
Fig. 4 is a schematic diagram of the color structure relationship. Visual differences also have a significant impact on the generation of the matching CB. The visual difference of the target image's CB must be controlled within a certain range while preserving the similarity of the contrast matrix to the greatest extent. In Fig. 4, L1 represents the unprocessed color chart, L2 represents the control of visual differences, and L3 represents the processed color chart. If the visual difference of the CB is greater than its proportion difference, Z-score standardization is required, represented by formula (11).
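Z-score standardization takes its usual form:

$$c^{*}=\frac{c-\mu}{\delta}\tag{11}$$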
In formula (11), $c^*$ refers to the standardized value. $c$ represents the original value, i.e. the value used to calculate color differences before standardization. $\mu$ refers to the average of all sample data. $\delta$ refers to the standard deviation of all sample data. After all sample data have been standardized, the Root Mean Square Error (RMSE) $E_c$ is represented by formula (12).
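The RMSE takes its usual form over the $n$ standardized sample pairs; here $\hat{c}_i^{*}$ denotes the corresponding template value, a notation assumed for illustration:

$$E_c=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(c_i^{*}-\hat{c}_i^{*}\right)^2}\tag{12}$$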
In formula (12), $n$ refers to the number of samples. However, pairwise comparison alone cannot capture the overall effect of the CB; the visual difference between newly added color points and the CB's overall mean must also be comprehensively considered. The overall CB contrast is represented by formula (13).
In formula (13), $E_{c1}$ refers to the difference between each color point and the CB's overall mean. $E_{c2}$ refers to the contrast difference among the color points in the CB. $E_{ca}$ refers to the overall CB contrast. The matching effect of the CB is quantified using formula (14).
In formula (14), $E$ refers to the energy value that quantifies the matching effect of the CB. Based on the target CB, the CB selection is iteratively optimized. The background color in a design is relatively uniform, and its brightness needs to be adjusted to suit the overall effect. The brightness adjustment function is represented by formula (15). In formula (15), $C_{tl}$ and $C_{ml}$ refer to the average brightness of the target and template CBs, respectively. $I_{tl}$ and $I_{ml}$ refer to the average brightness of the target and template images. $P_{tl}$ refers to the original brightness of the target image. $P_{ml}$ indicates the brightness value adjusted after color matching. Fig. 5 shows the brightness adjustment process.
Fig. 5. Brightness adaptive process.
Fig. 5 shows the brightness adaptive process. The target image is adjusted according to the brightness of the CB. After the colors are re-matched, some information may be lost; this can be repaired by matching the missing pixels with boundary points.
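Since the exact form of formula (15) is not reproduced above, the following is only an illustrative mean-shift instantiation of the brightness adjustment, under the assumption that the target image's lightness channel is shifted by the brightness gap between the two CBs:

```python
import numpy as np

def adjust_brightness(p_tl, c_tl, c_ml):
    """Hypothetical brightness adjustment: shift the target image's
    original lightness p_tl by the gap between the template CB's mean
    brightness c_ml and the target CB's mean brightness c_tl."""
    p_ml = p_tl + (c_ml - c_tl)
    return np.clip(p_ml, 0.0, 100.0)  # keep within the Lab L range

# l_adjusted = adjust_brightness(l_channel, target_cb_mean, template_cb_mean)
```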