Sanaullah Memon*
Department of Information Technology, Shaheed Benazir Bhutto University
Shaheed Benazirabad, Pakistan.
sanaullah.memon_nf@sbbusba.edu.pk
Rafaqat Hussain Arain
Institute of Computer Science, Shah Abdul Latif University Khairpur, Pakistan.
rafaqat.arain@salu.edu.pk
Ghulam Ali Mallah
Institute of Computer Science, Shah Abdul Latif University Khairpur, Pakistan.
ghulam.ali@salu.edu.pk
Sidra Rehman
Department of Computer Science, Iqra University Karachi, Pakistan.
sidra.rehman_n@iqra.edu.pk
Javeria Barkat
Department of Computer Science, Iqra University Karachi, Pakistan.
javeria.barkat@iqra.edu.pk
Muhammad Ahmad Siddiqui
Department of Computer Science, University of the Punjab, Lahore, Pakistan.
asahmadsiddiqui@gmail.com
Abstract
Images captured in unpredictable weather conditions frequently suffer from
significant degradation. The scattering and absorption of light by airborne
particles in the atmosphere degrade image quality, producing poor visibility,
low contrast, and color distortion. This degradation affects many computer
vision applications, as such conditions diminish the clarity of the visual
scene and cause a loss of image detail. Learning-based image dehazing
approaches play an imperative role in eliminating haze and enhancing the
quality of the recovered haze-free image. This paper presents a review of
learning-based image dehazing approaches, which employ different techniques to
approximate the atmospheric light and the transmission map in order to restore
a haze-free image with preserved details and color fidelity.
Introduction
Single image dehazing is an advanced computational technique used to recover
visibility and improve the quality of hazy images [1], as shown in Fig. 1. Its
aim is to estimate the underlying scene radiance and eliminate the unwanted
atmospheric effects caused by the scattering and absorption of light by
particles in the atmosphere, such as fog, smoke, and dust [2]. This atmospheric
degradation diminishes the color saturation and contrast of captured images,
making it difficult for automated systems and human viewers to see important
details [3]. The dehazing process involves estimating the transmission map and
the atmospheric light: in the common formulation I(x) = J(x)t(x) + A(1 - t(x)),
the transmission map t(x) indicates the spatially varying haze density and
degradation in different regions of the image, while the atmospheric light A
denotes the dominant light in the scene, produced by the dispersion of light by
tiny particles [4]. Recently, several algorithms and methods have been proposed
for single image dehazing, employing approaches such as the dark channel prior
(DCP) [5], color attenuation [6], and image fusion [7]. These methods often
employ optimization algorithms, advanced image processing techniques, and
machine learning models to achieve satisfying dehazing results. Single image
dehazing has attracted significant attention owing to its practical
applications in fields including surveillance, autonomous vehicles, and outdoor
imaging, where it is important to obtain clear and visually attractive images
even in bad weather conditions [8]. Many research papers on image dehazing have
been presented in [9-12]: a comparison of five algorithms based on the physical
scattering model is described in [9], various defogging approaches based on
enhancement and restoration are explored in [10][11], and visibility
enhancement methods for both uniform and non-uniform fog conditions are
presented in [12]. This paper reviews various deep learning-based image
dehazing approaches, enabling readers to understand the effectiveness of each
approach and contributing to the development of advanced dehazing methods.
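The estimation pipeline sketched above can be made concrete. Once the transmission map t(x) and the atmospheric light A have been estimated, the scattering model I(x) = J(x)t(x) + A(1 - t(x)) is inverted to recover the scene radiance J(x). The following minimal NumPy sketch illustrates this generic inversion step (the lower bound t_min is a common heuristic, not taken from any specific surveyed method):

```python
import numpy as np

def recover_radiance(hazy, transmission, atmospheric_light, t_min=0.1):
    """Invert the scattering model I = J*t + A*(1 - t) for J.

    hazy: HxWx3 float array in [0, 1]
    transmission: HxW estimated transmission map t(x)
    atmospheric_light: length-3 estimated global light A
    t_min: lower bound on t, a common heuristic that avoids
           amplifying noise where the haze is very dense
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]  # HxWx1 for broadcasting
    A = np.asarray(atmospheric_light, dtype=float).reshape(1, 1, 3)
    J = (hazy - A) / t + A  # estimated scene radiance
    return np.clip(J, 0.0, 1.0)
```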
End-to-End approaches
D. Engin et al. [13] presented Cycle-Dehaze, an enhanced version of the
CycleGAN framework for single image dehazing. Several modifications to the
end-to-end CycleGAN [14] architecture are introduced to improve dehazing
performance. The approach does not require paired hazy and haze-free images for
training or testing; instead, it uses CycleGAN to learn a style transfer from
hazy images to dehazed images. Moreover, the approach does not rely on
estimating the variables of the atmospheric scattering model. By integrating a
perceptual loss function into the existing CycleGAN framework, Cycle-Dehaze
improves texture recovery and generates visually superior dehazed images, as
illustrated in Fig. 2. However, Cycle-Dehaze requires significant processing
power and extensive parameter tuning to produce haze-free images, which makes
the approach less robust and may require domain expertise to achieve optimal
results.
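The distinctive ingredient of Cycle-Dehaze is the perceptual loss, computed on feature maps of a frozen, pretrained VGG16, that is added to the usual cycle-consistency loss. The PyTorch sketch below illustrates one plausible form of such a loss; the layer cut-off and loss weighting are illustrative assumptions rather than the exact configuration used in [13]:

```python
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """L2 loss in the feature space of a frozen, pretrained VGG16."""
    def __init__(self, layer_cutoff=16):  # through relu3_3 (illustrative choice)
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:layer_cutoff].eval()
        for p in self.features.parameters():
            p.requires_grad = False  # the loss network is never updated

    def forward(self, reconstructed, original):
        return nn.functional.mse_loss(self.features(reconstructed),
                                      self.features(original))

# Schematically: total = cycle_consistency_l1 + lam * PerceptualLoss()(G_b(G_f(x)), x)
# where lam is a small illustrative weight, e.g. 1e-4.
```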
H. H. Yang et al. [16] proposed Y-Net, a network for single image dehazing.
The network aggregates multi-scale features, allowing a better representation
of haze-related details and context. It applies the wavelet transform to
extract structural information, which helps preserve significant image details
during the dehazing process. The network is trained with a wavelet SSIM loss
function, which employs a series of discrete wavelet transforms to decompose
the image into patches of varying sizes, each characterized by different
frequencies and scales, as shown in Fig. 3. Y-Net is evaluated on the RESIDE
dataset and compared against existing image dehazing approaches; the
experimental findings show that it achieves superior performance on both
qualitative and quantitative metrics.
Fig. 3 (a) The process of the discrete wavelet transform. (b) The real image. (c) The
outcome obtained from applying the discrete wavelet transform twice. (d) The
ratios pertaining to various patches.
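To make the decomposition in Fig. 3 concrete, the sketch below applies a single-level Haar wavelet transform recursively and averages SSIM over the resulting sub-bands, assuming the PyWavelets and scikit-image packages; the per-level weights are illustrative, not the exact ratios used for Y-Net's wavelet SSIM loss:

```python
import numpy as np
import pywt
from skimage.metrics import structural_similarity as ssim

def wavelet_ssim(pred, target, weights=(0.6, 0.4)):
    """Average SSIM over Haar sub-bands, one weight per decomposition level.

    pred, target: 2-D grayscale arrays in [0, 1].
    """
    score = 0.0
    for w in weights:
        # Split each image into low-frequency (LL) and detail (LH, HL, HH) bands.
        p_ll, (p_lh, p_hl, p_hh) = pywt.dwt2(pred, "haar")
        t_ll, (t_lh, t_hl, t_hh) = pywt.dwt2(target, "haar")
        bands = [(p_ll, t_ll), (p_lh, t_lh), (p_hl, t_hl), (p_hh, t_hh)]
        band_scores = [ssim(p, t, data_range=max(p.max() - p.min(), 1e-6))
                       for p, t in bands]
        score += w * np.mean(band_scores)
        pred, target = p_ll, t_ll  # recurse on the low-frequency band
    return score  # higher is better; 1 - score can serve as a training loss
```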
Y. Shao et al. [17] proposed a domain adaptation method to tackle single image
dehazing when the training and testing data come from different domains. The
domain adaptation framework comprises an image translation module and two
dehazing modules. A bidirectional translation network establishes a connection
between the synthetic and real domains, enabling the translation of images
between them. The results of translating two synthetic hazy images are shown
in Fig. 4. The images before and after translation are then used to train the
two image dehazing networks under a consistency constraint. During this phase,
real hazy images are integrated into the dehazing training process, exploiting
the characteristics of clear images to enhance domain adaptivity. By training
the image translation and dehazing networks jointly, the approach achieves
improved results.
Fig. 4 The results of translating two synthetic hazy images. From left to
right: (a) synthetic hazy image, (b) translated image.
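The consistency constraint between the two dehazing branches can be read as a simple penalty tying the dehazed result of a synthetic image to the dehazed result of its translated counterpart. The schematic sketch below uses assumed module names (translate_s2r, dehaze_syn, dehaze_real) and is an interpretation of this constraint, not the complete training objective of [17]:

```python
import torch.nn.functional as F

def consistency_loss(x_syn, translate_s2r, dehaze_syn, dehaze_real):
    """Tie the two dehazing branches together across domains.

    x_syn: batch of synthetic hazy images (N, 3, H, W)
    translate_s2r: synthetic-to-real image translation network
    dehaze_syn / dehaze_real: dehazing networks for each domain
    """
    x_real_like = translate_s2r(x_syn)    # move the image to the real domain
    out_before = dehaze_syn(x_syn)        # dehaze before translation
    out_after = dehaze_real(x_real_like)  # dehaze after translation
    # Both branches should recover the same underlying scene radiance.
    return F.l1_loss(out_before, out_after)
```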
A. Singh et al. [18] described a single image dehazing approach that handles
various challenging haze scenarios, such as dense haze and non-homogeneous
haze. The approach uses a back projected pyramid network (BPPN) architecture
composed of several blocks. A pyramid convolution technique acquires spatial
features at multiple levels, and an iterative U-Net block learns complex and
distinct haze features without loss of structural information. Four
contemporary challenging datasets covering diverse haze scenarios are used to
optimize performance. The network is trained with a combination of MSE loss,
content loss, adversarial loss, and structural similarity loss. The proposed
approach is assessed on the challenging datasets and compared with other
dehazing approaches; experimental findings show that BPPN achieves competitive
dehazing performance across different types of haze scenarios.
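The pyramid convolution idea of gathering spatial features at several levels can be sketched as parallel convolutions with growing kernel sizes whose outputs are concatenated; the kernel sizes and channel split below are illustrative assumptions, not the exact BPPN configuration:

```python
import torch
import torch.nn as nn

class PyramidConv(nn.Module):
    """Parallel convolutions at several receptive-field sizes."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        assert out_ch % len(kernel_sizes) == 0
        branch_ch = out_ch // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):
        # Each branch sees the same input at a different spatial extent.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# PyramidConv(64, 96)(torch.randn(1, 64, 128, 128)).shape -> (1, 96, 128, 128)
```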
D. Chen et al. [24] proposed an approach for image dehazing and deraining,
named GCANet, that utilizes a smoothed dilation technique to eliminate the
grid artifacts caused by dilated convolution. Features from different levels
are fused by gated subnetworks, and the image is improved by collecting
information from neighboring regions and fusing multi-level features. The
network is trained with a mean squared error loss on the RESIDE dehazing
benchmark, which contains synthetic images. Experimental results demonstrate
that GCANet achieves outstanding performance in single image dehazing. This
CNN-based approach still has limitations, however: it captures limited
contextual information, and as the dilation rate rises, the information
sampled by neighboring elements of the convolution kernel becomes highly
varied, leading to grid artifacts in the haze-free results. Furthermore, the
approach is not well suited to recovering highly detailed information.
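The smoothed dilation technique inserts a small convolution, shared across all channels, before each dilated convolution so that neighboring pixels interact before being sampled at dilated offsets, which suppresses the grid artifacts described above. The PyTorch sketch below follows this idea; it is an illustration of the technique rather than the exact GCANet block:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmoothedDilatedConv(nn.Module):
    """Smooth features with one (2r-1)x(2r-1) kernel shared by all channels,
    then apply a 3x3 convolution with dilation rate r."""
    def __init__(self, channels, dilation):
        super().__init__()
        k = 2 * dilation - 1
        kernel = torch.zeros(1, 1, k, k)
        kernel[0, 0, k // 2, k // 2] = 1.0  # start as identity (no smoothing)
        self.shared_kernel = nn.Parameter(kernel)
        self.pad = k // 2
        self.dilated = nn.Conv2d(channels, channels, 3,
                                 padding=dilation, dilation=dilation)

    def forward(self, x):
        c = x.shape[1]
        # Expand the single learned kernel to a depthwise weight.
        weight = self.shared_kernel.expand(c, 1, -1, -1).contiguous()
        x = F.conv2d(x, weight, padding=self.pad, groups=c)
        return self.dilated(x)
```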
Z. Deng et al. [25] introduced a deep fusion approach for single image
dehazing that combines several dehazing models, each separating a different
layer, to improve the quality of hazy images. It comprises three stages to
produce the final dehazed image. First, an attentional feature integration
module is formulated to improve the incorporation of features from diverse
convolutional neural network layers and to produce attentional multi-level
integrated features. These features are then used to produce haze-free outputs
via an atmospheric scattering model and four haze-layer separation models, and
the outcomes are combined to generate the final dehazed image. To assess
dehazing performance, the network is compared with various image dehazing
approaches on two synthetic and real-world benchmark datasets. Experimental
findings prove that the suggested approach achieves outstanding dehazing
performance, generating dehazed results with improved image details and fewer
artifacts.
A multi-scale approach with dense feature fusion was proposed by J. Pan et al.
[26] that leverages both local and global information for effective dehazing.
The approach employs two principles, boosting and error feedback, to solve the
dehazing problem. The boosting strategy makes the network design effective at
recovering the dehazed image, while a dense feature fusion module integrates a
back-projection technique into the network to enhance performance. This fusion
helps capture multi-scale details and improves the representational power of
the network. Experimental findings on different datasets show that the network
achieves good dehazing quality compared to state-of-the-art approaches: it
eliminates haze while preserving image details and generates visually pleasing
results. The boosting strategy and the dense feature fusion module with the
back-projection technique contribute to the overall success of the proposed
approach.
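The boosting principle here follows the strengthen-operate-subtract (SOS) pattern: the current estimate is added back to the hazy input, refined by a learned operator, and then subtracted out. A schematic one-step sketch, with a placeholder refine subnetwork, might look as follows (an interpretation of the strategy, not the exact module of [26]):

```python
import torch.nn as nn

class SOSBoost(nn.Module):
    """One strengthen-operate-subtract step: J_next = refine(I + J) - J."""
    def __init__(self, refine: nn.Module):
        super().__init__()
        self.refine = refine  # any image-to-image subnetwork, e.g. a small U-Net

    def forward(self, hazy, current_estimate):
        strengthened = hazy + current_estimate  # strengthen
        operated = self.refine(strengthened)    # operate
        return operated - current_estimate      # subtract
```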
G. Fan et al. [27] proposed a U-Net architecture for image dehazing that
leverages depth information to improve the dehazing process. It combines
multi-scale depth maps at various stages using an encoder-decoder structure
with skip connections. By fusing depth information at multiple scales, the
network captures both local and global cues, enabling more precise and
comprehensive dehazing. A negative SSIM loss function is used to train the
network. A synthetic image dataset, the NYUv2 depth dataset, and the Make3D
dataset are used to verify the approach, ensuring that the haze-free images
preserve both visual and depth information. The experimental findings
illustrate that the network attains superior dehazing performance by
incorporating multi-scale depth information, which removes haze, improves
visibility, and generates high-quality haze-free images.
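A negative SSIM loss trains the network by minimizing 1 - SSIM between the output and the ground truth. The compact PyTorch sketch below uses a uniform averaging window and the conventional constants; [27] does not specify this exact implementation:

```python
import torch.nn.functional as F

def neg_ssim_loss(pred, target, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - SSIM with a uniform window; pred and target are NxCxHxW in [0, 1]."""
    pad = window // 2
    mu_p = F.avg_pool2d(pred, window, 1, pad)
    mu_t = F.avg_pool2d(target, window, 1, pad)
    var_p = F.avg_pool2d(pred * pred, window, 1, pad) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, window, 1, pad) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, window, 1, pad) - mu_p * mu_t
    ssim_map = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / \
               ((mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    return 1.0 - ssim_map.mean()
```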
Z. Chen et al. [28] proposed a multi-scale single image dehazing approach that
effectively integrates global and local features at various scales. It
improves hazy images that suffer from color distortion, reduced contrast, and
loss of fine details. The approach comprises two feature extraction modules
and one deep fusion module. The global feature extraction module computes
global features that capture the overall scene transmission and atmospheric
light; multiple scales are considered to handle varying object sizes and haze
levels. A deep fusion module combines the global and local features through
skip connections, where the local features portray the image contents. The
fusion strategy integrates the complementary information from both types of
features, improving overall dehazing performance. The network is trained with
a mean squared error loss computed between the haze-free image and the
ground-truth image. Artificially synthesized foggy images are used to train
and evaluate the proposed approach. Experimental findings demonstrate
significant improvements in color fidelity, visibility, and preservation of
fine details when compared to other dehazing algorithms.
J. Xu et al. [29] presented an approach for single image dehazing that
integrates transformer and convolutional neural network architectures. To
improve dehazing capability, the network captures both global and local
features using a transformer-convolution hybrid layer. An adaptive fusion
mechanism performs a trainable merging of the outputs of the Swin Transformer
and the optional convolution blocks. Five subsets of the RESIDE dataset are
employed to train the network, and an L1 loss function ensures the generation
of visually pleasing haze-free images. The experimental findings illustrate
superior performance compared to existing dehazing approaches: the method
effectively eliminates haze, enhances image visibility, and preserves image
details. Moreover, the integration of transformer and CNN architectures
provides a synergistic effect, improving the efficiency of the dehazing
approach.
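The adaptive fusion mechanism can be read as a learnable, content-dependent blending of the transformer and convolution branches. The sketch below gates the two feature maps with a learned per-pixel weight; this is one plausible reading of the mechanism in [29], not its exact definition:

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Blend transformer and CNN feature maps with a learned gate."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_transformer, feat_cnn):
        w = self.gate(torch.cat([feat_transformer, feat_cnn], dim=1))
        # w in (0, 1): per-pixel, per-channel trade-off between the branches
        return w * feat_transformer + (1.0 - w) * feat_cnn
```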
Fig. 7 Comparison of the hazy image, the dehazed image, and the multiple learned inputs.
X. Li et al. [35] proposed a two-stage single image dehazing network based on
the Swin Transformer model to effectively recover haze-free images from hazy
inputs. In the first stage, a transformer-CNN codec is developed to extract
and merge both local and global features, and an inter-block supervision
mechanism reduces the loss of feature information caused by upsampling and
downsampling, thereby enriching the features. In the second stage, local
features are extracted by an original-resolution block after interaction and
feature fusion. Furthermore, a fusion attention mechanism between the stages
facilitates the combination of shallow and deep features, enhancing the
learning capacity of the network. The network is trained with a joint loss
function on the RESIDE, I-Haze, and O-Haze benchmark datasets. Experimental
findings illustrate that the dehazing performance of the proposed approach
exceeds that of various other approaches.
S. Memon et al. [36] suggested a single image dehazing approach, AMSFF-Net,
that integrates multi-stream features at three different resolution levels. An
attention mechanism adaptively emphasizes important features while suppressing
inappropriate ones. Deep semantic loss, smooth L1 loss, and perceptual loss
are used to compute the statistical variation between the dehazed results and
the real images. The RESIDE and externelcvpr datasets are employed to train
and assess the approach. The suggested approach achieves improved performance
on qualitative and quantitative evaluation metrics over synthetic and
real-world datasets; it effectively removes haze from images, improves
visibility, and retains sharp textural and structural details.
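The attention used to emphasize important features while suppressing inappropriate ones can be sketched as squeeze-and-excitation-style channel attention over the concatenated resolution streams; the module below is an illustrative stand-in for the attention in AMSFF-Net [36], not its published architecture:

```python
import torch
import torch.nn as nn

class StreamAttentionFusion(nn.Module):
    """Fuse equally sized multi-stream features with channel attention."""
    def __init__(self, channels, n_streams=3, reduction=8):
        super().__init__()
        total = channels * n_streams
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                  # squeeze spatial dims
            nn.Conv2d(total, total // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(total // reduction, total, 1),
            nn.Sigmoid(),                             # per-channel importance
        )
        self.merge = nn.Conv2d(total, channels, 1)

    def forward(self, stream_feats):
        # stream_feats: list of n_streams tensors shaped (N, channels, H, W)
        x = torch.cat(stream_feats, dim=1)
        return self.merge(x * self.attn(x))  # reweight channels, then merge
```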
Le-Anh Tran et al. [38] presented an approach for single image dehazing that
feeds the transmission map extracted with the DCP as an additional input to
the network. The approach employs an encoder-decoder (U-Net) architecture, a
spatial pyramid pooling module, and the swish activation function to achieve
better performance. The encoder extracts and analyzes high-level features from
the input hazy image, and the decoder generates the output haze-free image.
The network is trained with a combination of MSE loss, perceptual loss, and
adversarial loss computed between the dehazed outputs and the corresponding
haze-free images. Four benchmark hazy-image datasets, Dense-Haze, I-Haze,
O-Haze, and NH-Haze, are used to train and evaluate the approach. Experimental
findings show that the suggested approach enhances the visibility of hazy
images, leading to improved image quality and details, as shown in Fig. 8.
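The guiding transmission map follows from the dark channel prior [5]: in a haze-free patch the dark channel is close to zero, so the transmission can be estimated as t(x) ≈ 1 - omega * (minimum of I/A over the patch and color channels). A compact sketch of this estimate is given below (omega = 0.95 and a 15-pixel patch are the customary DCP defaults, not values confirmed by [38]):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dcp_transmission(hazy, atmospheric_light, patch=15, omega=0.95):
    """Estimate t(x) = 1 - omega * dark_channel(I / A).

    hazy: HxWx3 float array in [0, 1]
    atmospheric_light: length-3 array A
    """
    normalized = hazy / np.asarray(atmospheric_light, dtype=float).reshape(1, 1, 3)
    # Dark channel: minimum over color channels, then over a local patch.
    dark = minimum_filter(normalized.min(axis=2), size=patch)
    return 1.0 - omega * dark
```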
Conclusion
This paper presents a review of deep learning-based image dehazing approaches.
The performance of the various approaches is assessed through quantitative
results obtained with different loss functions on synthetic images. Among the
reviewed methods, the hierarchical feature fusion network achieves performance
superior to the other dehazing approaches: its haze-free images preserve
important details and appear more visually realistic, with the mixed
convolution attention and hierarchical feature fusion contributing to improved
visibility and efficient haze removal. More broadly, advancements in deep
learning-based approaches have steadily improved the quality of recovered
haze-free images. The exploration of various network architectures and
attention mechanisms, together with the incorporation of generative
adversarial networks, has led to notable progress in handling intricate scenes
and challenging haze conditions. Continued research in this field holds great
promise for further improving the performance of single image dehazing
approaches.
References
[1] S. Hong, M. Kim, and M. G. Kang, “Single image dehazing via atmospheric
scattering model-based image fusion,” Signal Processing, vol. 178, 2021, doi:
10.1016/j.sigpro.2020.107798.
[2] F. Guo, X. Zhao, J. Tang, H. Peng, L. Liu, and B. Zou, “Single image dehazing
based on fusion strategy,” Neurocomputing, 2019, doi:
10.1016/j.neucom.2019.09.094.
[3] H. Wu et al., “Contrastive Learning for Compact Single Image Dehazing,” Proc.
IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 1, pp. 10546–
10555, 2021, doi: 10.1109/CVPR46437.2021.01041.
[4] R. Fattal, “Dehazing using color-lines,” ACM Trans. Graph., vol. 34, no. 1, pp.
1–14, 2014, doi: 10.1145/2651362.
[5] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel
prior,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341–2353,
2011, doi: 10.1109/TPAMI.2010.168.
[6] D. Ngo, G. D. Lee, and B. Kang, “Improved color attenuation prior for single-
image haze removal,” Appl. Sci., vol. 9, no. 19, 2019, doi: 10.3390/app9194011.
[7] Y. Wang, Z., Li, F., Cong, R., Bai, H., & Zhao, “Adaptive feature fusion network
based on boosted attention mechanism for single image dehazing,”
Multimed. Tools Appl., vol. 81, no. 8, pp. 11325–11339, 2022.
[8] M. Zheng, G. Qi, Z. Zhu, Y. Li, H. Wei, and Y. Liu, “Image dehazing by an
artificial image fusion method based on adaptive structure decomposition,”
IEEE Sens. J., vol. 20, no. 14, 2020, doi: 10.1109/JSEN.2020.2981719.
[9] Q. Liu, H. Zhang, M. Lin, and Y. Wu, “Research on image dehazing algorithms
based on physical model,” 2011 Int. Conf. Multimedia Technol. (ICMT), pp.
467–470, 2011, doi: 10.1109/ICMT.2011.6003078.
[10] A. K. Tripathi and S. Mukhopadhyay, “Removal of fog from images: A review,”
IETE Tech. Rev. (Institution Electron. Telecommun. Eng. India), vol. 29, no. 2, pp.
148–156, 2012, doi: 10.4103/0256-4602.95386.
[11] M. K. Saggu and S. Singh, “A review on various haze removal techniques for
image processing,” Int. J. Curr. Eng. Technol., vol. 5, no. 3, pp. 1500–1505,
2015.
[12] J. P. Tarel, N. Hautière, L. Caraffa, A. Cord, H. Halmaoui, and D. Gruyer, “Vision
enhancement in homogeneous and heterogeneous fog,” IEEE Intell. Transp.
Syst. Mag., vol. 4, no. 2, pp. 6–20, 2012, doi: 10.1109/MITS.2012.2189969.
[13] D. Engin, A. Genc, and H. K. Ekenel, “Cycle-Dehaze: Enhanced CycleGAN for
single image dehazing,” IEEE Conf. Comput. Vis. Pattern Recognit. Workshops
(CVPRW), pp. 938–946, 2018, doi: 10.1109/CVPRW.2018.00127.
[14] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image
translation using cycle-consistent adversarial networks,” Proc. IEEE Int.
Conf. Comput. Vis., 2017.
[15] Z. Liu, B. Xiao, M. Alrabeiah, K. Wang, and J. Chen, “Single Image Dehazing
with a Generic Model-Agnostic Convolutional Neural Network,” IEEE Signal
Process. Lett., vol. 26, no. 6, pp. 833–837, 2019, doi:
10.1109/LSP.2019.2910403.
[16] H. H. Yang, C. H. H. Yang, and Y. C. James Tsai, “Y-Net: Multi-scale
feature aggregation network with wavelet structure similarity loss function
for single image dehazing,” ICASSP, IEEE Int. Conf. Acoust. Speech Signal
Process., pp. 2628–2632, 2020, doi: 10.1109/ICASSP40776.2020.9053920.
[17] Y. Shao, L. Li, W. Ren, C. Gao, and N. Sang, “Domain adaptation for image
dehazing,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp.
2805–2814, 2020, doi: 10.1109/CVPR42600.2020.00288.
[18] A. Singh, A. Bhave, and D. K. Prasad, “Single image dehazing for a variety of
haze scenarios using back projected pyramid network,” arXiv:2008.06713v1,
pp. 1–16, 2020.
[19] Y. Lee and L. Wong, “Image dehazing with contextualized attentive U-Net,”
pp. 2–6.
[20] S. A. Hovhannisyan, H. A. Gasparyan, S. S. Agaian, and A. Ghazaryan,
“AED-Net: A single image dehazing,” IEEE Access, vol. 10, pp. 12465–12474,
2022, doi: 10.1109/ACCESS.2022.3144402.
[21] Y. Ma, J. Xu, F. Jia, W. Yan, Z. Liu, and M. Ni, “Single image dehazing
using generative adversarial networks based on an attention mechanism,” IET
Image Process., pp. 1897–1907, 2022, doi: 10.1049/ipr2.12455.
[22] C. Y. Jeong, K. D. Moon, and M. Kim, “An end-to-end deep learning approach
for real-time single image dehazing,” J. Real-Time Image Process., vol. 20, no.
1, pp. 1–11, 2023, doi: 10.1007/s11554-023-01270-2.
[23] W. Ren et al., “Gated Fusion Network for Single Image Dehazing,” Proc. IEEE
Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 3253–3261, 2018, doi:
10.1109/CVPR.2018.00343.
[24] D. Chen et al., “Gated context aggregation network for image dehazing and
deraining,” Proc. IEEE Winter Conf. Appl. Comput. Vis. (WACV), pp. 1375–1383,
2019, doi: 10.1109/WACV.2019.00151.
[25] Z. Deng and L. Zhu, “Deep Multi-Model Fusion for Single-Image Dehazing,”
Int. Conf. Comput. Vis., pp. 2453–2462, 2019, doi: 10.1109/ICCV.2019.00254.
[26] H. Dong, J. Pan, L. Xiang, Z. Hu, X. Zhang, F. Wang, and M.-H. Yang,
“Multi-scale boosted dehazing network with dense feature fusion,” Proc. IEEE
Conf. Comput. Vis. Pattern Recognit., pp. 2157–2167, 2020.
[27] G. Fan, Z. Hua, and J. Li, “Multi-scale depth information fusion network for
image dehazing,” Appl. Intell., vol. 51, no. 10, pp. 7262–7280, 2021, doi:
10.1007/s10489-021-02236-2.
[28] Z. Chen, H. Zhuang, J. Han, Y. Cui, and J. Deng, “Multi-scale single image
dehazing based on the fusion of global and local features,” IET Image
Process., vol. 16, no. 8, pp. 2049–2062, 2022, doi: 10.1049/ipr2.12467.
[29] J. Xu, Z. X. Chen, H. Luo, and Z. M. Lu, “An Efficient Dehazing Algorithm Based
on the Fusion of Transformer and Convolutional Neural Network,” Sensors,
vol. 23, no. 1, pp. 1–15, 2023, doi: 10.3390/s23010043.
[30] X. Liu, Y. Ma, Z. Shi, and J. Chen, “GridDehazeNet: Attention-based
multi-scale network for image dehazing,” Proc. IEEE Int. Conf. Comput. Vis.,
pp. 7313–7322, 2019, doi: 10.1109/ICCV.2019.00741.
[31] D. Fourure, R. Emonet, E. Fromont, D. Muselet, A. Tremeau, and C. Wolf,
“Residual conv-deconv grid network for semantic segmentation,” Br. Mach.
Vis. Conf. 2017, BMVC 2017, 2017, doi: 10.5244/c.31.181.
[32] X. Qin, Z. Wang, Y. Bai, X. Xie, and H. Jia, “FFA-Net: Feature fusion attention
network for single image dehazing,” AAAI 2020 - 34th AAAI Conf. Artif. Intell.,
pp. 11908–11915, 2020, doi: 10.1609/aaai.v34i07.6865.
[33] X. Zhang, J. Wang, T. Wang, and R. Jiang, “Hierarchical feature fusion
with mixed convolution attention for single image dehazing,” IEEE Trans.
Circuits Syst. Video Technol., pp. 1–13, 2021, doi:
10.1109/TCSVT.2021.3067062.
[34] X. Zhu, S. Li, Y. Gan, Y. Zhang, and B. Sun, “Multi-stream fusion network
with generalized smooth L1 loss for single image dehazing,” IEEE Trans. Image
Process., vol. 30, pp. 7620–7635, 2021, doi: 10.1109/TIP.2021.3108022.
[35] X. Li, Z. Hua, and J. Li, “Two-stage single image dehazing network using swin-
transformer,” IET Image Process., vol. 16, no. 9, pp. 2518–2534, 2022, doi:
10.1049/ipr2.12506.
[36] S. Memon, R. H. Arain, and G. A. Mallah, “AMSFF-Net: Attention-based
multi-stream feature fusion network for single image dehazing,” J. Vis.
Commun. Image Represent., vol. 90, p. 103748, 2023, doi:
10.1016/j.jvcir.2022.103748.
[37] A. Pavan, A. Bennur, M. Gaggar, and S. S. Shylaja, “LCA-Net: Light
convolutional autoencoder for image dehazing,” arXiv:2008.10325, 2020.
[38] L. A. Tran, S. Moon, and D. C. Park, “A novel encoder-decoder network with
guided transmission map for single image dehazing,” Procedia Comput. Sci.,
vol. 204, pp. 682–689, 2022, doi: 10.1016/j.procs.2022.08.082.