1 Faculty of Technical Sciences, University of Pristina in Kosovska Mitrovica, Kneza Milosa 7, 38220 Kosovska Mitrovica, Serbia; vladimir.maksimovic@pr.ac.rs (V.M.); jelena.todorovic@pr.ac.rs (J.T.)
2 Academy of Technical and Art Applied Studies, School of Electrical and Computer Engineering, Vojvode Stepe 283, 11000 Belgrade, Serbia; mirko.milosevic@viser.edu.rs
3 Directorate for Railways, Nemanjina 6, 11000 Belgrade, Serbia; lazar.mosurovic@gmail.com
* Correspondence: branimir.jaksic@pr.ac.rs
https://doi.org/10.3390/s25010087
Although such methods were developed primarily for MRI images, the principle behind them can be applied to other types of medical images, including the CT and retinal images discussed in this paper.
Deep learning-based methods, however, require large datasets and computational power during training and inference, which limits their application in resource-constrained environments.
2. System Model
Over 2000 images are analyzed in this paper, all taken from medical databases (retina, lung, and brain tumor segmentation) together with their corresponding ground truth, and their complexity was assessed based on the mean value of spatial information (SI mean) [11]. The images are divided into three complexity classes, and the classification is confirmed by computing the spatial information of each image: the Sobel filter is applied to the horizontal and vertical components of the image, and the mean value, standard deviation, and root mean square are then calculated [11]. SI mean is used as the primary measure because it has typically shown the best results in predicting image complexity. Based on these values, three complexity criteria were established, namely low complexity (LD), medium complexity (MD), and high complexity (HD). In other words, boundaries were defined to represent a small, a medium, and a high number of details in the image. The threshold estimation approach newly proposed in [11] was tested on images affected by various types of noise and varying noise concentrations. Figure 1 illustrates an example of an image with low, medium, and high complexity, along with its ground truth.
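As an illustration of this complexity measure, the sketch below computes the Sobel-based spatial information statistics for a grayscale image. It is a minimal sketch rather than the authors' exact implementation; the LD/MD/HD boundaries themselves are defined in [11] and are not reproduced here, and the image path in the usage example is hypothetical.

```python
import numpy as np
import cv2

def spatial_information(gray: np.ndarray) -> dict:
    """Sobel-based spatial information statistics (SI mean, SI std, SI rms).

    Minimal sketch of the complexity measure described in [11]; the LD/MD/HD
    class boundaries from that paper are not reproduced here.
    """
    gray = gray.astype(np.float64)
    # Horizontal and vertical Sobel components
    sh = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    sv = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    # Per-pixel gradient magnitude
    sr = np.sqrt(sh ** 2 + sv ** 2)
    return {
        "si_mean": float(np.mean(sr)),
        "si_std": float(np.std(sr)),
        "si_rms": float(np.sqrt(np.mean(sr ** 2))),
    }

# Example usage with a hypothetical image path
# img = cv2.imread("lung_ct_001.png", cv2.IMREAD_GRAYSCALE)
# print(spatial_information(img))
```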
The study in [22] shows how different parameter settings can affect the textural features and noise level in MRI images. These data are significant for the evaluation of algorithms under variable acquisition conditions, similar to how this paper analyzes the impact of different types and intensities of noise on the performance of edge detection in CT and other medical images. By using datasets with different modalities and image complexities, such as CT, retinal images, and MRI with quality variations, the specific challenges of edge detection can be investigated further, including the robustness of algorithms to noise and to variability in the acquisition parameters.
Different reconstruction kernels in CT scanning, such as sharp and soft kernels, significantly affect the image characteristics, including the noise level. Sharp kernels emphasize edges and small details but often increase the level of Gaussian noise, while soft kernels reduce noise at the expense of edge sharpness. This effect is described in the literature that analyzes the transformation between sharp and soft kernels using filtering techniques. In this paper, we do not include an analysis of the effects of different kernels, because our focus is not on the specifics of reconstruction algorithms in CT scanning but on the generalization of the edge detection algorithm with regard to dense medical images.
Edge detection serves to single out the desired objects in an image, so the detection should be as accurate as possible. A total of five edge detectors (Canny, LoG, Sobel, Prewitt, and Roberts) were applied to images of different complexity and different noise concentration (small, medium, and high noise intensity in the image). In the analysis, three types of noise were added to each image: salt and pepper, Gaussian, and speckle, with intensities of 0.01, 0.05, and 0.1. The objective F measure (F1 score) was used to verify the results [23]. The FoM and PR objective measures were also computed, but for brevity of the manuscript, only the F measure is presented in the graphs. Figure 2 shows small (0.01), medium (0.05), and high (0.1) noise intensities for various types of noise and the Canny operator. The standard algorithm with default values for the Canny operator was utilized. It is evident that noise significantly affects edge detection; the remainder of the paper compares each edge detection operator on such images, with the addition of the new approach described in [11]. All five operators are applied using this approach.
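The sketch below illustrates this experimental setup under stated assumptions: synthetic noise of the three types is added to a grayscale image and a Canny detector with typical default-style thresholds is applied, with the F measure computed pixel-wise against the ground-truth edge map. The helper names and file paths are illustrative, not the authors' code.

```python
import numpy as np
import cv2
from skimage.util import random_noise
from sklearn.metrics import f1_score

def add_noise(gray: np.ndarray, kind: str, intensity: float) -> np.ndarray:
    """Add salt & pepper, Gaussian, or speckle noise at the given intensity."""
    img = gray.astype(np.float64) / 255.0
    if kind == "salt_pepper":
        noisy = random_noise(img, mode="s&p", amount=intensity)
    elif kind == "gaussian":
        noisy = random_noise(img, mode="gaussian", var=intensity)
    elif kind == "speckle":
        noisy = random_noise(img, mode="speckle", var=intensity)
    else:
        raise ValueError(f"unknown noise type: {kind}")
    return (noisy * 255).astype(np.uint8)

def f_measure(detected: np.ndarray, ground_truth: np.ndarray) -> float:
    """Pixel-wise F1 score between a detected edge map and the ground truth."""
    return f1_score(ground_truth.ravel() > 0, detected.ravel() > 0)

# Example: Canny on a noisy image (paths are hypothetical)
# gray = cv2.imread("retina_01.png", cv2.IMREAD_GRAYSCALE)
# gt = cv2.imread("retina_01_gt.png", cv2.IMREAD_GRAYSCALE)
# for intensity in (0.01, 0.05, 0.1):
#     noisy = add_noise(gray, "salt_pepper", intensity)
#     edges = cv2.Canny(noisy, 100, 200)
#     print(intensity, f_measure(edges, gt))
```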
Figure 3. Flowchart of the proposed approach to threshold discovery based on (a) the grid search method and (b) the random search method.
All the procedures for implementing the new approach were repeated and implemented in the same way and on the same test database, but now on images affected by noise (a different implementation scenario). The flowchart of the approach in [11] is as follows:
Step 1: Loading the images from the dataset and ground truth images from
the database. That is, the image org (image from the dataset) and the image
with reference edges gt (ground truth image) are loaded.
Step 2: Loading the dataset of threshold values (th). For each detector, there is a dataset of 100 threshold values; the Canny detector uses 200 values because it has two thresholds.
Step 3: Edge detection is performed over the images from the dataset (edge(org, th)), where th is selected using the GS method: the entire dataset of 100 threshold values is searched, and the threshold that gives the best edge detection is selected. Specifically, the GS method goes through the whole dataset and takes the threshold that yields the best PR, F, and FoM values as the edge detection threshold. The objective measures require a reference image with ideal edges (ground truth); during edge detection and the search for the best threshold, the PR, F, and FoM values are obtained by comparing the ideal image with the edge map detected using the current threshold value. In the RS3, RS6, and RS9 methods, unlike the GS method where all values in the dataset are searched to find the best one, 3, 6, and 9 random values, respectively, are taken from the dataset.
Step 4: The output is an image with the best detected edges.
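A minimal sketch of Steps 2–4 is given below, assuming a per-detector dataset of threshold values and the pixel-wise F measure described earlier; a simple Sobel-magnitude thresholding stands in for the detectors, since the exact detector implementations are not reproduced here. Grid search (GS) evaluates every stored threshold, while RS3/RS6/RS9 sample 3, 6, or 9 random thresholds from the same dataset.

```python
import random
import numpy as np
import cv2
from sklearn.metrics import f1_score

def detect(gray: np.ndarray, th: float) -> np.ndarray:
    """Illustrative single-threshold detector: Sobel gradient-magnitude thresholding."""
    g = gray.astype(np.float64)
    sh = cv2.Sobel(g, cv2.CV_64F, 1, 0)
    sv = cv2.Sobel(g, cv2.CV_64F, 0, 1)
    return (np.sqrt(sh ** 2 + sv ** 2) > th).astype(np.uint8)

def f_measure(detected: np.ndarray, gt: np.ndarray) -> float:
    """Pixel-wise F1 score of a detected edge map against the ground truth gt."""
    return f1_score(gt.ravel() > 0, detected.ravel() > 0)

def grid_search(gray, gt, thresholds):
    """GS: evaluate every threshold in the dataset and keep the one with the best F value."""
    best_f, best_th = max((f_measure(detect(gray, th), gt), th) for th in thresholds)
    return best_th, best_f

def random_search(gray, gt, thresholds, k=9):
    """RS3/RS6/RS9: evaluate only k thresholds drawn at random from the dataset."""
    return grid_search(gray, gt, random.sample(list(thresholds), k))

# Example usage with a hypothetical 100-value threshold dataset
# thresholds = np.linspace(10, 300, 100)
# th_gs, f_gs = grid_search(gray, gt, thresholds)      # exhaustive search
# th_rs9, f_rs9 = random_search(gray, gt, thresholds)  # 9 random candidates
```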
Figure 4a,b show the algorithm's complexity. As evidenced by Figure 4, this complexity depends on the level of detail within the image, specifically the resolution and the dataset size. As the dataset size and resolution increase, the complexity escalates exponentially. The algorithm nevertheless demonstrates robust runtime performance across the tested medical image dataset. Based on Figure 4, it is noteworthy that the RS9 method yields notably high performance even with larger datasets. The findings of this study underscore the efficacy of RS9, recommending its application for optimizing algorithm performance with larger datasets.
Figure 4. Algorithm complexity using GS and RS9: (a) 2D, (b) 3D.
Of the proposed methods, GS is the most accurate but slowest, and RS is faster with a controlled reduction in accuracy. CNN models require more inference time on the GPU, but offer superior noise immunity and higher accuracy.
The proposed method of estimating thresholds using the GS algorithm offers precise parameter setting for edge detection, but has certain limitations in terms of computational efficiency, while RS9 makes better use of computing resources. GS, as a deterministic method, requires the examination of all possible threshold combinations, which increases the execution time and memory consumption when working with high-resolution medical images or large datasets. In comparison, RS reduces the execution time with a trade-off in accuracy, as it randomly selects a subset of the possible thresholds. The main challenge of the proposed method lies in its scalability to large datasets and high-resolution images. GS can become computationally inefficient, and future work will consider solutions including parallel processing, more efficient heuristic methods (e.g., Bayesian Optimization), and cloud computing.
3. Results
In Figure 5 [11], the results of edge detection at different image complexities are given. A total of five detectors (Canny, LoG, Sobel, Prewitt, Roberts) and three objective measures (F, FoM, PR) were used [9]. The obtained values of these measures show that the quality of the detected edge depends on the number of details in the image. Based on that fact and the results presented in Figure 5, for the LD images, the best edge detection was achieved using the Roberts operator, although the Sobel and Prewitt operators generated similar results. For the MD images, the Roberts operator also led to the best results. The Canny operator was the best choice for the HD images [11].
Figure 5. The values obtained by applying the standard approach for the images with
LD, MD, and HD using the five edge detectors (a) F, (b) FoM, (c) PR values.
First, edge detection was performed using the standard algorithm, and
then the proposed edge detection approach [11] was used, which selects the
best threshold value based on the random and grid searches to perform the
best possible edge detection.
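For completeness, a minimal sketch of Pratt's figure of merit (FoM), one of the objective measures used alongside F and PR, is given below. It follows the standard textbook formulation with the conventional scaling constant alpha = 1/9 and is not the authors' exact implementation; distances are measured from each detected edge pixel to the nearest ground-truth edge pixel.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_fom(detected: np.ndarray, ground_truth: np.ndarray, alpha: float = 1.0 / 9) -> float:
    """Pratt's figure of merit between a detected edge map and the ground truth.

    Standard formulation used here as a sketch; inputs are treated as binary edge maps.
    """
    det = detected > 0
    gt = ground_truth > 0
    n_det = int(det.sum())
    n_gt = int(gt.sum())
    if max(n_det, n_gt) == 0:
        return 1.0
    # Distance from every pixel to the nearest ground-truth edge pixel
    dist = distance_transform_edt(~gt)
    # Sum of penalized distances over detected edge pixels, normalized by max(N_detected, N_gt)
    score = np.sum(1.0 / (1.0 + alpha * dist[det] ** 2))
    return float(score / max(n_det, n_gt))
```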
Figure 6 shows the F values for the LD, MD, and HD images over which edge detection was performed and which contain salt and pepper noise with intensities of 0.01, 0.05, and 0.1, respectively. Detection was performed over these images with the five detection operators. According to Figure 6, the Canny detector was the best for all three complexity levels. As the noise concentration increased to 0.05 (Figure 6b), Canny again recorded the best results, although all values were slightly lower in comparison with 0.01 (Figure 6a), particularly for the LD images. As the noise concentration increased to 0.1 (Figure 6c), the values were considerably lower, which means that the edge detection itself was worse. As in the previous cases, Canny recorded the best results, and noise considerably worsened the detection for the LD images. It can also be concluded from Figure 6 that salt and pepper noise had a great influence on the edge detection, particularly in the LD images.
Figure 6. The F values obtained by applying the standard method for LD, MD, and HD
images in the presence of the salt and pepper noise with the intensities of (a) 0.01, (b)
0.05, and (c) 0.1.
Figure 7 shows the F values for the LD, MD, and HD images over which edge detection was performed and which contain speckle noise with intensities of 0.01 (Figure 7a), 0.05 (Figure 7b), and 0.1 (Figure 7c), respectively. For a noise concentration of 0.01, the gradient operators recorded considerably better results for the LD images than the LoG and Canny operators, and they also proved to be the better solution for the MD and HD images. However, when the level of noise in the image increased to an intensity of 0.05 and the number of details in the image was small, Prewitt and Sobel recorded good results, whereas Roberts recorded considerably lower values, as can be seen in Figure 7. For the MD and HD images, the Roberts operator recorded extremely poor results, particularly for the MD images. For the HD images, all the operators except Roberts recorded quite similar results. A comparison with Figure 5, which shows the results without noise, indicates that the results remain reasonably satisfactory. Where a high noise concentration is concerned, i.e., speckle noise with an intensity of 0.1, the Canny operator recorded the best results for the MD and HD images, whereas the Prewitt operator did so for the LD images. In this case as well, Roberts led to the worst results, i.e., edge detection that was not usable for further processing. In comparison with the lower noise concentrations, the detection was the worst, i.e., lower F values were obtained, as expected.
Figure 7. The F values obtained by applying the standard method for the LD, MD, and
HD images in the presence of the speckle noise with the intensities of (a) 0.01, (b) 0.05,
and (c) 0.1.
Figure 8 shows the F values for the images with LD, MD, and HD over which edge detection was performed and which contain Gaussian noise with intensities of 0.01 (Figure 8a), 0.05 (Figure 8b), and 0.1 (Figure 8c), respectively. For noise with an intensity of 0.01, the best results were obtained using the Sobel and Prewitt operators. For the MD images, the best results were obtained using the Prewitt operator. The Roberts operator again recorded very poor results in this case. The increase in noise concentration to 0.05 and then to 0.1 (Figure 8b,c) showed that the operators behaved very similarly to one another, as did the obtained edge detection values. The reason for this is attributed to the very model of Gaussian noise. A comparison with Figure 5, which shows the images without any noise, nevertheless reveals that Gaussian noise had a considerable influence on the edge detection for all complexity categories, but most of all for the LD images.
Figure 8. The F values obtained by applying the standard method for the LD, MD, and
HD images in the presence of Gaussian noise with the intensities of (a) 0.01, (b) 0.05,
and (c) 0.1.
If the noise types are compared with respect to edge detection, it can be noticed that noise exerts a great influence on the quality of edge detection. Salt and pepper and speckle noise affected the LD images in particular, especially at higher noise intensities. When salt and pepper noise is present, Canny proved to be the best operator for all three complexity categories. Canny also generated the best results in the case of speckle noise for the MD and HD images, while the Prewitt operator provided the best results for the LD images. In the case of Gaussian noise, Prewitt was the best operator for all three complexity categories.
Comparing these results with those shown in Figure 5, obtained without noise, it can be seen that the proposed approach brings very good improvements, i.e., very good edge detection, even in images with a high concentration of noise.
Due to the volume of the work, only the detection for the Canny operator is shown for noise intensities of 0.01, 0.05, and 0.1; the images for the other edge detection operators are available on request. Comparing this figure with the results obtained using the standard approach, it can be noticed that better results were obtained when the noise intensity was small and the number of details was medium. As the results show, the best results were obtained for a small number of details in the image.
Figure 10 shows the F values for the LD, MD, and HD images over which
edge detection was performed, and which also had speckle noise with the
intensities of 0.01 (Figure 10a),
0.05 (Figure 10b), and 0.1 (Figure 10c), respectively.
The best detection for all levels of detail in the image was achieved by applying the Canny operator for a low noise concentration, i.e., 0.01 (Figure 10a). When the noise in the image had intensities of 0.05 (Figure 10b) and 0.1 (Figure 10c), the best results were again recorded by the Canny operator. Although the values were slightly lower in the case of a high noise concentration, considerably better results were obtained by applying the proposed approach based on the grid threshold search method. Compared with the salt and pepper case for the LD images, the values for speckle noise were better, which means that salt and pepper noise affected the edge detection more, whereas the influence was largely similar for the MD and HD images. Figure 10d–f show the detection with the Canny operator for all three levels of speckle noise intensity in the image. As in the previous case, the best results were achieved for a low noise intensity in the image, but they were visibly better than with the original approach.
Figure 10. The F values obtained by applying the proposed approach based on the GS
threshold search method for LD, MD, and HD images in the presence of the speckle noise
with the intensities of
(a) 0.01, (b) 0.05, and (c) 0.1 and visual edge detection on that image using Canny
operator for noise intensities of (d) 0.01, (e) 0.05, and (f) 0.1.
Figure 11 also shows the F values for the images with LD, MD, and HD
over which edge detection was performed, which on their part also contain
the Gaussian noise with the intensities of 0.01 (Figure 11a), 0.05 (Figure
11b), and 0.1 (Figure 11c), respectively.
Figure 11. The F values obtained by applying the proposed approach based on the GS
threshold search method for LD, MD, and HD images in the presence of Gaussian noise
with the intensities of
(a) 0.01, (b) 0.05, and (c) 0.1 and visual edge detection on that image using the Canny operator for noise intensities of (d) 0.01, (e) 0.05, and (f) 0.1.
As in the previous cases, the Canny operator recorded the best results for all three complexity categories and for all three noise intensity levels. In comparison with the previous noise types, a lower value was recorded only for the LD images, whereas better results were obtained for the MD and HD images when Gaussian noise of high intensity was present. The results show that even in this case an improvement was made compared with the detection obtained when the proposed approach was not used. Comparing the results obtained with the Canny operator in the presence of Gaussian noise at all three intensities, the new method achieved significantly better results than the standard one and detected the edges very efficiently.
Figure 12 shows the results obtained when Rician noise was present, with the F values given for intensities of (a) 0.05, (b) 0.1, and (c) 0.15. Figure 12d–f show the edge detection when the Canny operator was applied, while Figure 12g–i show the detection when the Sobel operator was applied for the described noise intensities. The results show that when Rician noise was present, algorithms based on simple mask techniques, such as Sobel, Prewitt, and Roberts, gave better results than the Canny and LoG edge detection methods, especially for low- and medium-intensity noise. Comparing the results shown in Figure 12 with those shown in Figures 9–11, it can be seen that, when Rician noise of low and medium intensity is present, the GS method is much more effective when operators such as Sobel, Prewitt, and Roberts are used rather than Canny or LoG. The same conclusion about the effectiveness of the GS method is reached when a high intensity of Rician noise is present.
Figure 12. The F values obtained by applying the proposed approach based on the GS threshold search method for LD, MD, and HD images in the presence of Rician noise with the intensities of (a) 0.05, (b) 0.1, and (c) 0.15 and visual edge detection on that image using the Canny operator for noise intensities of (d) 0.05, (e) 0.1, and (f) 0.15 and the Sobel operator for (g) 0.05, (h) 0.1, and (i) 0.15.
Figure 14 also shows the F values for the LD, MD, and HD images over which edge detection was performed and which contained speckle noise with intensities of 0.01 (Figure 14a), 0.05 (Figure 14b), and 0.1 (Figure 14c), respectively. Figure 14d–f show the detection with the Canny operator when the approach based on nine random threshold values was used on the images with speckle noise of 0.01, 0.05, and 0.1 intensity.
The Canny detector recorded the best results for speckle noise with an intensity of 0.01 in the LD images. However, good detection was also achieved by the other operators, except for the Roberts operator. The Roberts operator recorded the best detection for the MD and HD images. A further increase in noise to an intensity of 0.05 led to detection in which the Prewitt operator achieved the best results for the LD images, whereas the Sobel, Prewitt, and Roberts operators recorded comparable results for the MD and HD images. The Canny operator also recorded good detection for the HD images. When the intensity of noise in the image was 0.1, the Prewitt operator recorded the best detection for all three complexity levels, but the values generated by the other operators were comparable for the HD images.
Figure 15 shows the F values for the LD, MD, and HD images over which edge detection was performed and which were affected by Gaussian noise with intensities of 0.01 (Figure 15a), 0.05 (Figure 15b), and 0.1 (Figure 15c), respectively. Figure 15d–f show the detection with the Canny operator when the approach based on nine random threshold values was used on the images with Gaussian noise of 0.01, 0.05, and 0.1 intensity.
Figure 13. The F values obtained by applying the proposed approach based on the
RS9 threshold search method for LD, MD, and HD images in the presence of salt and
pepper noise with the intensities of (a) 0.01, (b) 0.05, and (c) 0.1 and visual edge
detection on that image using Canny operator for noise intensities of (d) 0.01, (e)
0.05, and (f) 0.1.
Figure 14. The F values obtained by applying the proposed approach based on the
RS9 threshold search method for LD, MD, and HD images in the presence of speckle
noise with the intensities of
(a) 0.01, (b) 0.05, and (c) 0.1 and visual edge detection on that image using Canny
operator for noise with intensities of (d) 0.01, (e) 0.05, and (f) 0.1.
Figure 15. The F values obtained by applying the proposed approach based on the
RS9 threshold search method for LD, MD, and HD images in the presence of Gaussian
noise with the intensities of
(a) 0.01, (b) 0.05, and (c) 0.1 and visual edge detection on that image using Canny
operator for noise intensities of (d) 0.01, (e) 0.05, and (f) 0.1.
For a noise intensity of 0.01 (Figure 15a), it can be noticed that the values are largely similar to those obtained with six random values, but there is still an improvement, particularly for the Canny operator. The Sobel and Prewitt operators achieved the best detection for all three complexity levels. For the HD images, Canny and Roberts achieved almost equal detection. When the intensity of noise in the image was 0.05 (Figure 15b), the situation was quite similar to that at the lower noise concentration (which was expected given the nature of the noise), and the noise affected the Canny operator on the LD images. When the intensity of noise in the image was 0.1 (Figure 15c), it influenced the Canny operator and the LD images the most, whereas for the other operators the detection was similar to that under the previous noise concentrations for all complexity categories.
Figure 16 shows the results obtained when Rician noise was present, with the F values given for intensities of (a) 0.05, (b) 0.1, and (c) 0.15. Figure 16d–f show the edge detection when the Canny operator was applied, while Figure 16g–i show the detection when the Sobel operator was applied for the described noise intensities. The results show that when Rician noise is present, algorithms based on mask techniques such as Sobel, Prewitt, and Roberts give better results than the Canny and LoG edge detection methods, especially for low- and medium-intensity noise. Comparing these results with those obtained for the other types of noise, it can be seen that the proposed method is more efficient with the Sobel, Prewitt, and Roberts operators than with Canny and LoG.
devices, but may show different performance depending on the type of noise.
Inceptionv3 is a complex model that uses multiple convolutional filters of
different sizes in each layer. This architecture allows for better edge detection
in the presence of complex noises, but can be computationally demanding.
Figure 17. Comparison of proposed approach and other approaches using Canny edge
detection on the noisy image affected by salt and pepper: (a) low intensity, (b) medium
intensity, and (c) high intensity.
Figure 18. Comparison of proposed approach and other approaches using Canny edge
detection on the noisy image affected by speckle: (a) low intensity, (b) medium
intensity, and (c) high intensity.
Figure 19. Comparison of proposed approach and other approaches using Canny edge
detection on the noisy image affected by Gaussian: (a) low intensity, (b) medium
intensity, and (c) high intensity.
intensity of this type of noise. Such approaches can preserve more detail in
some scenarios, especially when the noise intensity is extremely high.
Although the proposed model shows solid performance, at a moderate
Gaussian noise intensity, methods that combine Gaussian filtering with
sophisticated edge detection algorithms can show better results in certain
aspects, such as edge smoothness and noise reduction without a significant
loss of detail. As the noise intensity increases, the performance of all the
methods decreases. However, the proposed approach still shows good results.
The difference in performance is still significant, but slightly less pronounced
than at a low intensity. At a high noise intensity, all the models experience a
large drop in performance. Nevertheless, the proposed approach manages to
maintain relatively better results, with clearer edges and less noise. The
difference compared to other models is smaller, but still significant. In these
cases, methods using advanced speckle noise filtering techniques, such as
Wavelet decomposition, can provide better results in certain aspects, such as
preserving the edge structure while reducing noise.
There are more advanced models within machine learning, belonging to the subcategory of deep learning, such as DexiNet (Dense Extreme Inception Network) [32], LDC (Lightweight Dense CNN) [33], and CATS (Context-Aware Tracing Strategy) [34]. DexiNet is a deep learning model designed for edge extraction in images, known for its extremely detailed edge detection. It uses an architecture inspired by Inception blocks, modified to include dense connectivity and multiscale feature extraction. The goal of the model is to identify the edges of objects in images with a higher level of detail, especially in scenarios where the edges are thin and indistinct. The blocks are used for multiscale analysis, allowing the model to identify edges at different resolutions. Each layer is connected to all the previous layers, thus improving feature propagation and optimization during training. Thanks to the densely connected architecture and multiscale analysis, it is suitable for images with a large number of details, but also for images of forest scenes. However, due to its architecture, DexiNet requires significant processing resources [32]. Even better results are shown by the LDC method, which is also based on the DexiNet algorithm; however, in order to reach a better compromise between performance and applicability, a smaller filter size and compact modules were used. As a result of this modification, a model with fewer than 1 M parameters is obtained, which is fifty times smaller than DexiNet, as well as lighter than most state-of-the-art approaches [33]. CATS is a model based on a context-aware tracing strategy for edge detection, built on the observation that the localization ambiguity of deep edge detectors is mainly caused by the mixing phenomenon of convolutional neural networks: feature mixing in edge classification and side mixing during the fusion of side predictions [34]. The results in [34] show that CATS provides better detection than RCF (Richer Convolutional Features) and BDCN (Bi-Directional Cascade Network) by 12% and 6%, respectively, when evaluated without the morphological non-maximum suppression scheme for edge detection [34]. AI models have demonstrated state-of-the-art performance in
edge detection tasks, especially in complex and noisy images. However, it is
important to note that AI-based solutions, particularly deep learning models,
are not entirely robust or safe. Recent studies have demonstrated that even
minor perturbations, such as changing the value of a single pixel, can
drastically alter the results of classification or edge detection. This
phenomenon, known as adversarial attack, raises concerns about the reliability
of AI in sensitive domains like medical imaging. For instance, in [35], it is
highlighted how adversarial attacks could compromise medical image
classification, emphasizing the need for robust AI models that are resilient to
such manipulations. This represents an important challenge for future work,
particularly in ensuring the robustness of AI-based edge detection methods
under adversarial conditions.
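As a hedged illustration of this sensitivity, the sketch below perturbs a single pixel of an input image and counts how many edge-map pixels change. It uses a classical Canny detector as a stand-in, since the adversarial attacks discussed in [35] are optimized against learned classifiers and are not reproduced here; the image path in the usage example is hypothetical.

```python
import numpy as np
import cv2

def single_pixel_perturbation_effect(gray: np.ndarray, row: int, col: int) -> int:
    """Count how many edge-map pixels change after flipping one input pixel.

    Illustrative only: real adversarial attacks optimize the perturbation against
    a learned model, whereas here a fixed pixel is simply pushed to its extreme value.
    """
    edges_before = cv2.Canny(gray, 100, 200)
    perturbed = gray.copy()
    perturbed[row, col] = 255 if gray[row, col] < 128 else 0
    edges_after = cv2.Canny(perturbed, 100, 200)
    return int(np.count_nonzero(edges_before != edges_after))

# Example usage with a hypothetical CT slice
# gray = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE)
# print(single_pixel_perturbation_effect(gray, 120, 200))
```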
research, where their influence on the trade-off between noise reduction, edge
preservation, and computational efficiency of the algorithm would be examined
in detail.
4. Contributions
The results make a contribution to the field of medical image processing,
with a special emphasis on edge detection in images affected by different
types of noise. The key contributions of the paper can be summarized as
follows:
Two proposed methods were tested for estimating the optimal edge detection thresholds over medical images affected by different types and intensities of the noise typically encountered in such images.
First, a GS method that searches the threshold parameter and provides
the most accurate edge detection results was tested. The testing has shown
that GS achieves the maximum F-measure even for highly complex images
(with a high number of details), which confirms its accuracy in difficult
conditions.
Second, the RS9 method significantly reduces the processing time (0.75 s per image) with minimal memory usage (0.01 MB), providing a balance between efficiency and accuracy. This contribution is particularly significant for real-time applications and for systems with limited computing resources.
A comparative analysis of the performance of five traditional detectors (Canny, LoG, Sobel, Prewitt, and Roberts) on medical images affected by different noise types is also presented: salt and pepper, speckle, Gaussian, and Rician noise. The Canny operator achieves the best edge detection accuracy on images with Gaussian and speckle noise, especially when the complexity is high. The Sobel and Prewitt operators show greater resistance to Rician noise, and their results are stable on images of medium complexity. The Roberts operator gives the most efficient results for low-complexity images, with a significantly shorter execution time. These results provide a clear framework for algorithm selection depending on the complexity of the image and the type of noise present.
Compared with deep learning methods, the proposed approach shows the following results. With regard to execution time, GS requires 7.35 s per image, while RS9 achieves processing in 0.75 s, confirming its suitability for applications that require fast data processing. Regarding memory load, RS9 has negligible memory usage (0.01 MB), while DL models require significantly more resources due to their large number of parameters and the need for GPU inference. A comparison with CNN models (e.g., U-Net, DexiNet, and LDC) showed that the traditional threshold optimization methods, especially RS9, offer sufficiently high accuracy at a far lower computational cost. By testing on images with three levels of complexity (low, medium, and high) and different noise intensities (0.01, 0.05, and 0.1), the robustness of the proposed methods was confirmed. Threshold optimization enables efficient edge detection even in high-noise images, thus contributing to the better segmentation of structures in medical images. The research results are directly applicable in medical image processing.
The proposed approach can improve edge detection, especially in the presence of noise; examples include the detection of blood vessels in retinal images, which is crucial for the early recognition of diabetic retinopathy, the segmentation of tumor edges in brain MRI images, enabling more precise monitoring of changes, and the detection of nodules in lung CT scans, which facilitates the early detection of malignancy. The proposed approach shows robust performance even in the presence of noise, while its efficiency in terms of time and memory requirements opens possibilities for application in systems with limited resources, as well as in real-time applications for medical diagnostics.
As part of the contribution, potential directions for further development are identified, such as the integration of edge-preserving filters (Bilateral Filter, Anisotropic Diffusion) to improve edge detection in images with extreme noise levels.
The optimization of the execution time can be achieved through parallel processing and the application of Bayesian Optimization algorithms. The development of hybrid systems that combine the advantages of traditional algorithms with deep learning methods can achieve an optimal balance between accuracy and speed. One potential limitation is the longer execution time of the GS method with larger datasets. Nevertheless, the research results show that the proposed number of thresholds and the tested dataset provide sufficiently good results, and it is not necessary to further increase the volume of the datasets for practical applications (except where a specific application requires it). On the other hand, the RS9 method shows significantly less sensitivity to dataset size and successfully maintains its efficiency even when applied to larger datasets.
5. Conclusions
In this paper, an analysis of the impact of noise on edge detection was conducted, as well as a comparative analysis of the impact of noise on the effectiveness of the proposed threshold value estimation approach [9]. A dataset of medical images with corresponding reference ideal-edge images (ground truth) was used for the analysis. The analysis was performed for four noise types: salt and pepper, speckle, Rician, and Gaussian. For each noise type, different intensities, i.e., different concentrations of noise in the image (0.01, 0.05, and 0.1), were used. The images were categorized according to their complexity (low, medium, and high), which was determined based on the spatial information in the image.
The results of the analysis show that the proposed approach was quite
suitable when images affected by noise are concerned, particularly so when
the Canny operator was applied. The approach has demonstrated remarkable
resilience across the varied terrains of noise. Also, the findings not only
underscore the importance of a tailored threshold value estimation method
but also highlight its adaptability to different noise scenarios. This adaptability
is particularly vital in the real-world application of edge detection, where
images often contend with a multitude of noise sources. Furthermore, the
categorization of images into complexity classes based on their spatial
characteristics has enriched our understanding of noise’s impact. We
observed that the effectiveness of the proposed approach remains
consistent, regardless of an image’s complexity. This observation bodes well
for practical applications, as real-world images are seldom uniform in their
spatial characteristics.
The results provide a solid basis of analysis and comparison for further research efforts, such as the optimization of the approach parameters with the help of machine learning for image filtering in the presence of noise. The findings presented in this study offer a strong foundation for future research endeavors, in particular the application of machine learning techniques for the optimization of edge detection parameters in noisy conditions. Machine learning's adaptive capabilities may offer a dynamic solution to the persistent challenge of noise in image processing. This avenue of research has the potential to refine the existing techniques and make them more achievable in real-world applications.
The direction of future research will be the additional optimization of the algorithms over a dataset consisting of a larger number of images, as well as specialized images such as MRI, CT, and satellite images, and also the application of new optimization techniques, especially those based on deep learning and machine learning.
Data Availability Statement: The data supporting the findings of this study are derived from the Kaggle website, which is publicly accessible at https://www.kaggle.com/datasets/beosup/lung-segment (accessed on 12 November 2024) for the lung segmentation dataset, https://www.kaggle.com/datasets/nikhilroxtomar/brain-tumor-segmentation (accessed on 12 November 2024) for the brain tumor segmentation dataset, and https://www.kaggle.com/datasets/andrewmvd/drive-digital-retinal-images-for-vessel-extraction (accessed on 12 November 2024) for the Retina dataset. The dataset includes images and their ground truth edge maps used in our analysis. Simulated noisy images and the results of the edge detection experiments can be obtained from the corresponding author upon reasonable request.
20. Baltierra, S.; Valdebenito, J.; Morales, M.M. Edge detection in images with multiplicative noise using the Ant Colony System algorithm. Eng. Appl. Artif. Intell. 2022, 110, 104715. [CrossRef]
21. Li, W.; Zhang, L.; Wu, C.; Zhenxiang, C.; Chao, N. A new lightweight deep neural network for surface scratch detection. Int. J. Adv. Manuf. Technol. 2022, 123, 1999–2015. [CrossRef] [PubMed]
22. Obuchowicz, R.; Piórkowski, A.; Urbanik, A.; Strzelecki, M. Influence of acquisition time on MR image quality estimated with nonparametric measures based on texture features. Biomed Res. Int. 2019, 2019, 3706581. [CrossRef] [PubMed]
23. Maksimovic, V.; Jaksic, B.; Petrovic, M.; Palevic, P. New approach to edge detection on different levels of wavelet decomposition. Comput. Inform. 2019, 38, 1067–1090. [CrossRef]
24. Bergstra, J.; Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 2012, 13, 281–305.
25. Kim, S.; Jung, M.; Park, J. A study of tool wear measurement using image processing. J. Korea Robot. Soc. 2024, 19, 65–70. [CrossRef]
26. Hu, G.; Saeli, C. Enhancing deep edge detection through normalized Hadamard-product fusion. J. Imaging 2024, 10, 62. [CrossRef] [PubMed]
27. BenHajyoussef, A.; Saidani, A. Recent advances on image edge detection. In Digital Image Processing: Latest Advances and Applications; IntechOpen: London, UK, 2024. [CrossRef]
28. Sun, R.; Lei, T.; Chen, Q.; Wang, Z.; Du, X.; Zhao, W.; Nandi, A.K. Survey of image edge detection. Front. Signal Process. 2022, 2, 826967. [CrossRef]
29. Raj, D.M.D.; Shanmuganathan, H.; Geetha, A.; Keerthika, V. EGF: An Improved Edge Detection Model for Low-Resolution Images. In Proceedings of the 2nd International Conference on Futuristic Technologies (INCOFT), Belagavi, India, 24–26 November 2023. [CrossRef]
30. Yan, J.; Zhang, L.; Luo, X.; Peng, H.; Wang, J. A novel edge detection method based on dynamic threshold neural P systems with orientation. Digit. Signal Process. 2022, 127, 103526. [CrossRef]
31. Kalbasi, M.; Nikmehr, H. Noise-robust, reconfigurable Canny edge detection and its hardware realization. IEEE Access 2020, 8, 39934–39945. [CrossRef]
32. Soria, X.; Sappa, A.D.; Humanante, P.; Akbarinia, A. Extreme inception network for edge detection. Pattern Recognit. 2023, 139, 109461. [CrossRef]
33. Soria, X.; Pomboza-Junez, G.; Sappa, A.D. LDC: Lightweight Dense CNN for edge detection. IEEE Access 2022, 10, 68281–68290. [CrossRef]
34. Huan, L.; Xue, N.; Zheng, X.; He, W.; Gong, J.; Xia, G.-S. Unmixing convolutional features for crisp edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6602–6609. [CrossRef] [PubMed]
35. Tsai, M.J.; Lin, P.Y.; Lee, M.E. Adversarial attacks on medical image classification. Cancers 2023, 15, 4228. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of
the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim
responsibility for any injury to people or property resulting from any ideas, methods, instructions or products
referred to in the content.