Image Dehazing
                         TABLE OF CONTENTS

Certificate
Abstract
CHAPTER 4: TESTING
4.1 Testing Strategy
4.2 Test Cases and Outcome
CHAPTER 5: RESULTS AND EVALUATION
5.1 Results
        LIST OF TABLES

1. Literature Review Table
2. Training Dataset
3. Testing Dataset
                 LIST OF FIGURES

1. Project Design
2. Network Architecture
3. Importing necessary Python libraries
4. Function to load train dataset from NYU2 dataset
5. Creating train dataset using NYU2 Depth dataset
6. Checking and validating created dataset
7. Printing the information of loaded datasets
8. Output information of loaded datasets
9. Refinement of Transmission Maps
10. Extraction and processing of outdoor image dataset
11. Extraction and processing of indoor image dataset
12. Learning rate decay schedule function
13. Display the information
14. Output of the above snippet
15. Creating neural networks to estimate transmission maps
16. Compiling the model
17. Compiled Transmission Model
18. Transmission Model
19. Trans Model Shape
20. Training of transmission model
21. Importing Libraries
22. Learning Rate Decay Schedule Function
23. Displaying the Information
24. Training Dataset Information
25. Computation of residual_input and residual_output
26. Residual Based Network Model
27. Compiling the residual model
28. Count of trainable and non-trainable parameters
29. Training the residual-based Network Model
30. Function to perform dehazing of image
31. Streamlit application to upload image
32. Importing necessary Python libraries for the testing of our model
33. Util Functions
34. Transmission Model
35. Residual Model
36. Haze Removal Function
37. Output of Trained Model
38. Output Image based on pre-trained models
39. Iteration for image dehazing
40. Initial preprocessing of input image
41. Predict Transmission Map
42. Residual Model Input
43. Predicting Residual Image
44. Generate Dehazed Image
45. Plotting of Images
46. Input Image for Dehazing
47. Transmission Map
48. Refined Transmission Map
49. Residual Model Input Image
50. Residual Model Output Image
51. Generated Haze Free Image
52. Saving of Image
53. Plotting the model loss and model learning rate
54. Model Learning Rate
55. Model Loss
56. Saving the model and weights
57. Plotting the model loss and model learning rate
58. Model Learning Rate
59. Model Loss
60. Saving the Model and Weights
61. Home Page of our Web Application
62. Dehazed Image generated by the web application
               LIST OF ABBREVIATIONS
                                       ABSTRACT
Haze causes a loss of contrast, colour saturation, and overall sharpness by giving images a
milky or fog-like appearance. Distant objects may appear faded and less distinct, and the
image's overall visual quality may be severely diminished. Haze is a frequent problem in
photography and computer vision, and it can be brought on by a variety of factors, including
atmospheric conditions (such as fog or mist), pollution, and even the scattering of natural
light. Haze and atmospheric scattering have a major negative impact on image quality,
visibility, and a number of computer vision applications, including scene analysis, object
recognition, and object detection in outdoor photographs. Because haze forms and spreads in
a complicated, non-linear way, traditional image dehazing techniques frequently fail to
remove haze and restore crisp details. A powerful and effective image dehazing approach
that can precisely recover the real scene content from hazy photographs using cutting-edge
deep learning methodologies is therefore urgently needed.

The main goal of this project is to design and implement a residual-based deep CNN
architecture that can recognize and exploit the intricate correlations between hazy and
clear images.
               CHAPTER – 1 INTRODUCTION
1.1 INTRODUCTION
Haze is created when large numbers of atmospheric particles or water droplets scatter or
absorb the light that passes through them. Images taken in haze and other adverse weather
conditions suffer severe colour attenuation, low contrast and saturation, and poor visual
quality, and they also degrade other systems that depend on optical imaging equipment,
including target recognition and outdoor monitoring systems, satellite remote sensing
systems, and aerial photography systems. This presents several challenges for the goals of
this research. There is therefore a pressing and practical need for efficient dehazing and
sharpness recovery in order to enhance the quality of haze-degraded images and lessen the
influence of haze and other meteorological circumstances on outdoor imaging systems.
The atmospheric scattering model describes the fog image as the superposition of scene
radiation and scattering effects. The physical model can be described as follows:

    I(x) = J(x) t(x) + A (1 - t(x)),    with t(x) = e^(-β d(x))

where x is an image pixel point. The initial hazy input picture is denoted by I(x), and J(x)
represents the restored, dehazed image. A is the value of the ambient atmospheric light. The
distance between the object and the camera is represented by d(x), and t(x) is the optical
route propagation (transmission) map, which shows exponential attenuation with the image
depth d(x). The scattering capacity of light per unit volume of the atmosphere is represented
by the atmospheric scattering coefficient β, which is typically taken to be a small constant.
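To make the model concrete, here is a minimal Python sketch that synthesizes a hazy image
from a clear image and its depth map using the equation above; the function name and the
values of beta and A are illustrative assumptions, not the project's settings.

    import numpy as np

    def synthesize_haze(J, d, beta=1.0, A=0.8):
        # J: clear RGB image scaled to [0, 1]; d: per-pixel depth map.
        # beta and A are example values, not the report's settings.
        t = np.exp(-beta * d)             # t(x) = e^(-beta * d(x))
        t = t[..., np.newaxis]            # broadcast over the colour channels
        return J * t + A * (1.0 - t)      # I(x) = J(x) t(x) + A (1 - t(x))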
Haze affects even distant views: objects become blurry and blend into one another, appearing
faded and undefined. This tendency is made worse by meteorological conditions such as fog or
mist, which impair the observer's ability to distinguish features precisely and produce a
visually chaotic scene. Pollution also plays a part in this visual compromise, as airborne
particles add to the haze and detract from the picturesque splendour that the lens captures.

The dispersion of natural light typically adds further haze, making it difficult to capture a
sharp picture. In photography and computer vision, haze is a persistent challenge that demands
careful approaches to maintain visual integrity. Photographers and technologists must reveal
the hidden beauty beneath the ethereal veil of haze, navigating atmospheric challenges with a
delicate mix of techniques and technologies.
1.3 OBJECTIVES

The primary objective of our project is to eliminate or minimize the haze caused by particles
in the atmosphere, in order to enhance contrast and clarity, and to develop and validate an
advanced image and video dehazing algorithm using a deep convolutional neural network (CNN)
with a residual-based architecture. The key goals are as follows:

• Development of an image and video dehazing model.
• Transmission map estimation: we will develop a mechanism within our network that accurately
  estimates the transmission map from hazy images or video frames in the initial phase of our
  project's dehazing process.
• Real-world applicability: we will also assess the algorithm's potential for real-world
  applications where atmospheric conditions pose challenges to visual perception.
• Residual-based dehazing: we will implement a residual network in the second phase of our
  project to leverage the ratio of foggy images or video frames and the previously estimated
  transmission map for efficient removal of haze.
• Haze removal: our general objective is to improve the visibility of objects and scenes by
  reducing or removing haze to enhance overall robustness.
• Adaptability: our dehazing algorithm should work well in different lighting scenarios and
  adjust to varying air conditions and haze levels.
• Testing and validating the model: testing and validation are an essential stage of our
  project. By contrasting the model's predictions with a set of test data, we can evaluate
  the model's performance. We will use the NYU2 depth dataset to train the deep neural
  network and evaluate the model's performance via various metrics.
This project aims to redefine perception in our visual world by addressing the haze that
affects image and video clarity. It seeks to innovate and enhance technological capabilities
to overcome the atmospheric obstacles that hinder our ability to interpret visual data. The
project is motivated by the human desire for clearer sight, in both literal and metaphorical
terms, and aims to remove visual hindrances caused by weather conditions. The proposed
two-stage network model, featuring a deep convolutional neural network (CNN) architecture,
represents a significant advancement in deep learning techniques. It challenges conventional
methods and aims to revolutionize image and video dehazing, pushing the boundaries of what
can be achieved through human ingenuity and computational power. The strategic utilization of
the NYU2 depth dataset is a deliberate decision to ground the project in real-world data,
ensuring that the solutions developed are applicable beyond controlled environments.

This dataset serves as a connection between theoretical brilliance and practical
implementation. Beyond algorithms and datasets, the project also holds the potential for
societal impact: it envisions a world where clearer satellite imagery aids disaster response
efforts and where autonomous vehicles navigate landscapes no longer obstructed by haze.
          CHAPTER – 2 LITERATURE SURVEY
Furthermore, Chaitanya and Snehasis [3] proposed a solution that is effective for both indoor
and outdoor hazy photographs. However, it has difficulty reliably detecting the true hue of
ground-truth photographs in regions with high cloud thickness. In contrast, Li et al. [4]
provided a semi-supervised method for effectively learning the domain gap between synthetic
data and real-world images; however, its efficacy decreases in heavy-haze settings.
… while dehazing them.

4. Semi-Supervised Image Dehazing [4]
   Venue: IEEE Transactions on Image Processing (Volume 29), 2019
   Datasets: NYU Depth, RESIDE-C, HazeRD, and SOTS
   Findings: the proposed semi-supervised method is effective in learning the domain gap
   between synthetic data and real-world images, thus alleviating the over-fitting problems.
   Limitations: the model performs less effectively when the image suffers from severe haze.

5. Recursive Deep Residual Learning for Single Image Dehazing [5]
   Venue: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
   Method/Dataset: Deep Residual Learning (DRL) network for image dehazing; NYU-Depth V2 dataset
   Findings: DRL outperforms all other methods by a large margin.
   Limitations: the DRL method was only marginally slower than AOD (All-in-One Dehazing).

6. A Cascaded Convolutional Neural Network for Single Image Dehazing [6]
   Venue: IEEE Access (Volume 6), 2018
   Method/Dataset: atmospheric scattering model and cascaded CNN model; NYU-V2 Depth dataset
   Findings: the proposed method outperforms the state-of-the-art methods on both synthetic
   and real-world hazy images.
   Limitations: the model exhibits image artifacts and noise because the training dataset is
   generated from the atmospheric scattering model, which does not take artifacts and noise
   into account.

7. Single Image Dehazing Using Ranking Convolutional Neural Network [7]
   Venue: IEEE Transactions on Multimedia (Volume 20, Issue 6), June 2018
   Method/Dataset: Ranking-CNN; Dataset-Syn (dense haze) and Dataset-Cap (light haze)
   Findings: the Ranking CNN model comes out to be more effective than a classical CNN as it
   can capture …
   Limitations: the efficiency of the Ranking CNN is lower than that of other models.

8. … Network [8]
   Venue: … Conference on Computer Vision
   Findings: … (light, medium or heavy), our AOD-Net model constantly improves detection,
   surpassing both naive Faster R-CNN and non-joint approaches.

9. A Research on Single Image Dehazing Algorithms Based on Dark Channel Prior [9]
   Venue: Journal of Computer and Communications, Vol. 4, No. 2, February 2016
   Method/Dataset: dark channel prior; a single-image dehazing dataset prepared by the authors
   Findings: the proposed model achieves the highest SSIM value and hence attains a dehazing
   accuracy that brings the output image closest to the haze-free image.
   Limitations: lack of a big dataset.
• Diversity in datasets: we found that many papers relied on a narrow range of datasets, such
  as NYU Depth and the Synthetic Objective Testing Set, which may not fully represent the
  diversity of real-world hazy scenarios. Many researchers expressed the need for more varied
  and realistic datasets to ensure the robustness and efficiency of dehazing algorithms
  across a wide range of conditions.
• Discrepancy between algorithms and datasets: several articles highlighted gaps between
  dataset size and algorithm complexity. In some cases the model was less effective because
  it needed a larger dataset than was available.
         CHAPTER 3: SYSTEM DEVELOPMENT
   3.1 REQUIREMENTS AND ANALYSIS
FUNCTIONAL REQUIREMENTS

• Image upload and processing
  i. The software should be able to handle uploaded images and save them for further
     processing.
  ii. It must be able to feed the uploaded image into our model to extract the necessary
     features and generate an output.
• Image surroundings identification
  i. The model should be able to dehaze an image irrespective of where it was taken, i.e.,
     outdoors or indoors.
  ii. The model should adjust to the type of image and, after analysing it, process and
     dehaze it and generate the output.
• Real-time processing - The system should process the image and produce the output in
  minimal time, thus increasing its overall usability.
• Training mode - The system should be able to retrain the model when new data becomes
  available.

NON-FUNCTIONAL REQUIREMENTS

• Performance - The system should be able to dehaze the input image in under 2 seconds.
• Reliability - The system should attain the minimum possible model loss.
• Scalability - The model should be refined enough to be scaled up for use in websites or
  Android apps.
• Compatibility - The model should be compatible with Python front-end frameworks such as
  Flask, for the development of a website where the user can upload a hazy image and receive
  the dehazed image.
• Maintainability - The codebase of the model and front end should follow industry best
  practices to improve readability and understandability.
• Training dataset
  i. We have used two separate datasets for training and testing of the model.
  ii. The dataset includes both indoor and outdoor images.
• Image formats - The model supports all common image formats such as JPEG, PNG, and JPG.
• Validation and testing - Testing on the testing dataset and producing the output.

HARDWARE REQUIREMENTS

• GPU(s) - High-performance GPUs, such as an NVIDIA Maxwell GPU, are needed for efficient
  training.
• CPU - A multi-core CPU, such as a quad-core ARM Cortex-A57 processor, is required to train
  the model efficiently and handle its operations.
• RAM - A good amount of memory is required, such as 16 GB LPDDR4.
• Storage - Adequate storage is needed to save all the necessary checkpoints during training
  of the model, as well as other necessary files.

SOFTWARE REQUIREMENTS

• Language - Python is used throughout the project for the implementation.
• Deep learning frameworks - Suitable frameworks such as TensorFlow and Keras are used for
  training and testing the models.
• Version control - Git is used for version control, and the code is pushed to GitHub.
• Development environment - Jupyter Notebook and VS Code are used for training and testing
  the model.
In this project we will be following the project design below, where we first process the
dataset and then train the model with the network architecture given in Fig. 2.
3.3 DATA PREPARATION
After surveying many existing similar models and research papers, we observed that many of
them use publicly available datasets: the NYU2 Depth Dataset and the RESIDE Standard Dataset.

We will use separate datasets for the training and testing of the model:

1) NYU2 DEPTH DATASET (TRAINING DATASET)
3.3.2 DATA PREPROCESSING
In order to train a better model and obtain good results and accuracy, we preprocess the
dataset.

TRAINING DATASET

In this snippet, we import all the necessary libraries needed to load and process the data.

In this code snippet, we create the training dataset using the NYU2 Depth Dataset. First we
load the data using the load_train_dataset() function defined earlier; after loading, we
create an HDF5 file for the training data, inside which we create three datasets: clear
images, transmission values, and haze images.

Here, we display the 1000th clear image, the corresponding haze image, its associated
transmission value, and the shapes of the clear and haze images.
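As a rough illustration of the dataset-creation step, the sketch below writes three datasets
to an HDF5 file with h5py; the file name mj.hdf5 comes from the report, while the array
shapes and exact dataset keys are assumptions (random stand-in arrays are used so the snippet
runs on its own).

    import h5py
    import numpy as np

    # Stand-in arrays; in the project these come from load_train_dataset()
    # applied to the NYU2 data. Shapes and key names are assumptions.
    clear_images = np.random.rand(100, 224, 224, 3).astype('float32')
    haze_images = np.random.rand(100, 224, 224, 3).astype('float32')
    transmission = np.random.rand(100).astype('float32')

    with h5py.File('mj.hdf5', 'w') as f:
        f.create_dataset('clear_images', data=clear_images)
        f.create_dataset('haze_images', data=haze_images)
        f.create_dataset('transmission', data=transmission)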
                   Fig. 7 Printing the information of the loaded datasets
Here, an instance of the transmission model is created and loaded with pre-trained weights;
the hazy images are then padded symmetrically to handle edge effects during convolution.
TESTING DATASET
                  Fig. 10 Extraction and Processing of Outdoor image Dataset
In this snippet, we create the outdoor test images from the SOTS subset of the RESIDE
dataset, saving each hazy image along with its corresponding clear image, which is given a
'_clean' suffix so that we can test the models at later stages.

In this snippet, we do the same for the indoor images from the SOTS subset of the RESIDE
dataset.
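A hedged sketch of the pairing scheme just described: each hazy file is copied next to its
ground truth, which receives the '_clean' suffix. Only the suffix convention comes from the
report; the directory layout and file-name pattern are assumptions.

    import os
    import shutil

    hazy_dir = 'SOTS/outdoor/hazy'       # assumed paths
    clear_dir = 'SOTS/outdoor/clear'
    out_dir = 'data/test_outdoor'
    os.makedirs(out_dir, exist_ok=True)

    for fname in os.listdir(hazy_dir):
        base, ext = os.path.splitext(fname)
        scene_id = base.split('_')[0]    # assumes hazy names begin with the scene id
        shutil.copy(os.path.join(hazy_dir, fname),
                    os.path.join(out_dir, fname))
        shutil.copy(os.path.join(clear_dir, scene_id + ext),
                    os.path.join(out_dir, scene_id + '_clean' + ext))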
3.4 IMPLEMENTATION

For the implementation we will be using the Transmission Network Model and a Residual-Based
Network.

The transmission network model is a type of neural network used to estimate transmission
maps in images.
• Input-output mapping – its primary function is to map input data to corresponding output
  data.
• Supervised or unsupervised learning – it supports both supervised and unsupervised
  learning.
• Loss functions – while training TNMs, their parameters can be optimized by minimizing a
  loss function.
• Robustness – these models are robust.
For this project, we use the concept of the transmission network model to create transmission
maps between the input haze image and the output clear image.

In this function, we initially load the training dataset from the mj.hdf5 file created
earlier. We then define the weight initialization strategy using a Gaussian distribution with
a mean of 0 and a standard deviation of 0.001. We also schedule the learning rate decay,
i.e., the learning rate is halved at epochs 49 and 99.
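The two pieces just described might look as follows in Keras; the halving epochs (49 and 99)
and the Gaussian initializer come from the text, while the base learning rate of 0.001 is an
assumed value.

    from tensorflow.keras.callbacks import LearningRateScheduler
    from tensorflow.keras.initializers import RandomNormal

    # Gaussian weight initialization: mean 0, standard deviation 0.001.
    weight_init = RandomNormal(mean=0.0, stddev=0.001)

    def lr_schedule(epoch):
        base_lr = 0.001                  # assumed base learning rate
        if epoch >= 99:
            return base_lr * 0.25        # halved at epoch 49 and again at 99
        if epoch >= 49:
            return base_lr * 0.5
        return base_lr

    lr_callback = LearningRateScheduler(lr_schedule)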
                             Fig. 14 Output of the above snippet
Here we construct the neural network that estimates transmission maps between input and
output, with a total of 18 layers (a sketch of the multi-scale idea follows the list):

• 1 input layer
• 2 convolution blocks
• 4 Lambda layers
• 1 Maximum layer
• 6 multi-scale convolutional blocks
• 1 Concatenate layer
• 1 MaxPooling2D layer
• 2 convolutional blocks
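A minimal sketch of the multi-scale idea only, not the exact 18-layer network: parallel
convolutions at several kernel sizes are concatenated, and a final 1-channel sigmoid output
gives a per-pixel transmission value. Filter counts and kernel sizes are assumptions.

    from tensorflow.keras import layers, Input, Model

    def multi_scale_block(x, filters=16):
        # Parallel convolutions at different receptive fields, concatenated.
        branches = [layers.Conv2D(filters, k, padding='same',
                                  activation='relu')(x)
                    for k in (3, 5, 7)]
        return layers.Concatenate()(branches)

    inp = Input(shape=(None, None, 3))
    x = layers.Conv2D(16, 3, padding='same', activation='relu')(inp)
    x = multi_scale_block(x)
    x = layers.MaxPooling2D(pool_size=2, strides=1, padding='same')(x)
    out = layers.Conv2D(1, 3, padding='same', activation='sigmoid')(x)
    sketch_model = Model(inp, out)       # predicts t(x) in (0, 1) per pixel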
Fig. 16 Compiling the Model
Fig. 18 Transmission Model
Fig. 19 Trans Model Shape
The transmission model is trained with the following settings (a sketch of the resulting
training call follows):

• 150 epochs
• batch size of 30
• LearningRateScheduler – used to dynamically adjust the learning rate during training.
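Put together, the training call implied by these settings might look like the sketch below,
reusing sketch_model and lr_callback from the earlier sketches; haze_images and trans_maps
stand in for the arrays loaded from mj.hdf5, and the SGD/mean-squared-error choice follows
the optimizer mentioned in Chapter 4.

    sketch_model.compile(optimizer='sgd', loss='mean_squared_error')
    sketch_model.fit(haze_images, trans_maps,
                     epochs=150, batch_size=30,
                     callbacks=[lr_callback])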
It is a type of CNN architecture created to solve the vanishing and exploding gradient issues
that frequently arise with deeper networks. Residual-based learning has the following key
features (a minimal sketch of a residual block follows the list):

• Residual learning – the residual block, also known as a skip connection, is the essential
  component of a ResNet. It facilitates the optimization of deep networks by enabling
  activations to bypass one or more layers via a shortcut link.
• Identity shortcut connections – these introduce the concept of bypassing or skipping one or
  more layers. The primary idea is that it is easier to optimize the residual mapping, i.e.,
  the difference between the input and output, than the desired output directly.
• Deep architectures – using residual blocks, ResNets can be built with 50, 101, or 152
  layers, and even deeper variants are available.

For this project, we use the concept of learning residual information, representing the
difference between the hazy and clear images.
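A minimal Keras sketch of one residual block along the lines described above (batch
normalization, a 3x3 convolution, and a shortcut connection); the filter count is an
assumption, and the input is assumed to already have that many channels so the addition is
valid.

    from tensorflow.keras import layers

    def residual_block(x, filters=64):
        # x is assumed to already have `filters` channels.
        y = layers.BatchNormalization()(x)
        y = layers.Conv2D(filters, 3, padding='same', activation='relu')(y)
        return layers.Add()([x, y])      # shortcut: activations bypass the block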
                       Fig. 22 Learning Rate Decay Schedule Function

In this function, we again load the training dataset from the mj.hdf5 file created earlier,
define the weight initialization strategy using a Gaussian distribution with a mean of 0 and
a standard deviation of 0.001, and schedule the learning rate decay, i.e., the learning rate
is halved at epochs 49 and 99.
                 Fig. 25 Computation of residual_input and residual_output

Here we first compute residual_input by normalizing the haze images. residual_output is then
computed by subtracting the clean images from residual_input, after which it is clipped to
ensure it stays within the [0, 1] range; residual_output thus represents the difference
between residual_input and the clean image.
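In NumPy terms, the computation described above reduces to something like the following;
haze_images and clear_images are stand-ins, assumed to be a uint8 array and a float array in
[0, 1] respectively.

    import numpy as np

    # Normalize the hazy images, then take the clipped difference.
    residual_input = haze_images.astype('float32') / 255.0
    residual_output = np.clip(residual_input - clear_images, 0.0, 1.0)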
In the residualModel() function, we define the residual model using Keras and TensorFlow.
                           Fig. 27 Compiling the Residual Model

Here, we have created and compiled our residual-based network model, consisting of:

• 1 convolutional block
• 17 residual blocks

and trained it with:

• 150 epochs
• batch size of 30
• LearningRateScheduler – used to dynamically adjust the learning rate during training.
In this project, a user-friendly web interface is built using Streamlit, a Python framework
for building interactive web applications. This interface allows users to upload hazy images,
and the web app provides them with the dehazed image in real time.
                      Fig. 30 Function to perform dehazing of image

In this code snippet, we implement the function that dehazes a given input image.

In the snippet above, we implement the Streamlit front-end web application, with the feature
of uploading images from the user's device.
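A hedged sketch of such a Streamlit front end; dehaze_image() is the report's dehazing
function (Fig. 30), used here as a stand-in, and the page layout is an assumption.

    import streamlit as st
    from PIL import Image

    st.title('Image Dehazing')
    uploaded = st.file_uploader('Upload a hazy image',
                                type=['jpg', 'jpeg', 'png'])
    if uploaded is not None:
        hazy = Image.open(uploaded)
        st.image(hazy, caption='Hazy input')
        st.image(dehaze_image(hazy), caption='Dehazed output')  # report's function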
• One of the major challenges was that image quality varies hugely, since the data contains
  both indoor and outdoor images, creating differences in lighting and contrast conditions.
  The goal was therefore to create a model capable of dehazing both indoor and outdoor images
  while retaining their original colours as far as possible.
• Since it is a CNN, heavy computational power is required to train the model, and with ever
  larger datasets containing more images, a machine with a powerful GPU is needed to train it
  efficiently and quickly.
                        CHAPTER – 4 TESTING
Following the successful training of the model, the testing and validation phase begins by
evaluating the model's performance on randomly selected photos from the dataset.

Fig. 32 Importing necessary Python libraries for the testing of our model

This snippet imports the various libraries needed to test our model: NumPy for numerical
computing; h5py for interacting with HDF5 files, which store large amounts of numerical data;
the math library for basic mathematical operations; Keras as the high-level neural network
API; TensorFlow as the underlying open-source machine learning framework; Matplotlib for
creating visualisations; and PIL for opening, manipulating, and saving different image file
formats. The architecture includes convolutional layers, batch normalisation, and activation
functions. We set up an optimizer (stochastic gradient descent) and a custom callback that we
can use to monitor and control the model's training process. We configure Keras to use a
specific image data format, likely 'channels_last', and use '%matplotlib inline' to display
plots directly below the code cell whenever it is run.
This snippet defines two functions, Guidedfilter and TransmissionRefine, which are important
components of the image-processing pipeline and are associated with tasks like image/video
dehazing. The Guidedfilter function implements a technique commonly used for smoothing images
while preserving important edges; given an image im, a map p, a window radius r, and a
regularisation parameter eps, it calculates local means, covariances, and filter coefficients
to produce a guided-filter output q. The second function, TransmissionRefine, refines a
transmission map (et) using the guided filter defined above: the input image im is converted
to grayscale and normalised, and the guided-filter process is then applied to enhance the
transmission map. These two functions serve as essential steps in the broader image-processing
pipeline, improving image quality by reducing noise and refining the transmission information
for subsequent stages.
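For reference, a common implementation of this pair of functions looks like the sketch below
(a re-implementation in the spirit of the report's code, not a copy of it); the window radius
of 60 and eps of 0.0001 in TransmissionRefine are assumed values.

    import cv2
    import numpy as np

    def Guidedfilter(im, p, r, eps):
        # im: grayscale guidance image in [0, 1]; p: map to be filtered.
        mean_I = cv2.boxFilter(im, cv2.CV_64F, (r, r))
        mean_p = cv2.boxFilter(p, cv2.CV_64F, (r, r))
        mean_Ip = cv2.boxFilter(im * p, cv2.CV_64F, (r, r))
        cov_Ip = mean_Ip - mean_I * mean_p        # local covariance of (I, p)
        mean_II = cv2.boxFilter(im * im, cv2.CV_64F, (r, r))
        var_I = mean_II - mean_I * mean_I         # local variance of I
        a = cov_Ip / (var_I + eps)                # local linear coefficients
        b = mean_p - a * mean_I
        mean_a = cv2.boxFilter(a, cv2.CV_64F, (r, r))
        mean_b = cv2.boxFilter(b, cv2.CV_64F, (r, r))
        return mean_a * im + mean_b               # guided-filter output q

    def TransmissionRefine(im, et):
        # im: original uint8 BGR image; et: raw transmission map.
        gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        gray = np.float64(gray) / 255.0
        return Guidedfilter(gray, et, 60, 0.0001)  # r and eps are assumed values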
This code defines a residual neural network (ResNet) architecture using the Keras library.
The ResNet is designed to learn important features of the input image efficiently. The
ResidualBlock function defines the fundamental building block of the network: batch
normalisation followed by a 3x3 convolutional layer with a residual connection. This block is
applied iteratively in the ResidualModel function, where an initial convolutional block is
followed by 17 residual blocks to capture and retain crucial features. The model generates a
3-channel output with a Rectified Linear Unit (ReLU) activation and is tuned to learn and
represent patterns in the input data, with a sole focus on feature extraction for the
image-processing task at hand.
                              Fig. 36 Haze Removal Function

This code defines the function dehaze_image, designed for image/video dehazing using the two
deep neural networks. It begins by loading an input image, normalising its pixel values, and
padding them symmetrically. It then uses the two models we have already pre-trained. The
TransmissionRefine function enhances the accuracy of the transmission estimate. The dehazing
process continues with the creation of a residual map, which represents the difference
between the original input and the refined transmission. The second model, the ResidualModel,
refines the residual map further, and the resulting haze-free image is obtained by
subtracting the refined map from the original input, with the final output constrained to
pixel values between 0 and 1.
This code segment takes an input image, and the system generates the dehazed output image
using the two pre-trained models.
                    Fig. 38 Output image based on pre-trained models

Here the code iterates from 0 to 22 and, at each iteration, checks whether the current index
matches one of several predetermined values (0, 5, 8, 12, 13, 15, or 20). If it does, the
loop skips that index; otherwise it proceeds with the dehazing process. The dehaze_image
function is applied to images loaded from the directory, and the dehazed results are saved to
a different directory under new, modified file names.
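A sketch of that loop under assumed paths and file names; dehaze_image is the report's
function, and its exact signature here is an assumption.

    skip = {0, 5, 8, 12, 13, 15, 20}
    for i in range(23):                  # iterate from 0 to 22
        if i in skip:
            continue
        dehazed = dehaze_image(f'test_images/{i}.jpg')   # assumed signature
        dehazed.save(f'results/{i}_dehazed.jpg')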
                            Fig. 41 Predict Transmission Map

Here the image is converted into a NumPy array representing its pixel values, which are
normalised by dividing each value by 255.0 so that they lie in the range 0 to 1. The image is
then symmetrically padded with a border of 7 pixels on the top, bottom, left, and right
sides, to accommodate the convolutional operations applied during subsequent processing.
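In code, this preprocessing amounts to roughly the following; 'hazy.jpg' is a placeholder
file name.

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open('hazy.jpg'), dtype=np.float32) / 255.0   # scale to [0, 1]
    padded = np.pad(img, ((7, 7), (7, 7), (0, 0)), mode='symmetric')     # 7-pixel border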
Here a model is employed to estimate transmission maps. The code initialises and loads
pre-trained weights for a deep learning model known as the TransmissionModel, which we
designed to estimate transmission maps for dehazing. It is applied to the input image, and
the resulting transmission map is refined with our TransmissionRefine function. These are
essential steps for effective dehazing in the overall image-enhancement pipeline.
Next, the code calculates a residual map by dividing the original input image by the refined
transmission map, expanding the dimensions of the refined map first so the division
broadcasts correctly. It then initialises and loads the pre-trained weights of the residual
model, which refines residual maps in the dehazing process. The model is applied to the input
residual map to obtain a refined output that captures the residual information of the image.
                              Fig. 44 Generate Dehazed Image

This code calculates the haze-free image by subtracting the refined residual map from the
original input, after which the result is clipped to pixel values between 0 and 1. These
steps yield the dehazed image/video as the desired output, enhancing overall image quality by
removing haze artifacts.
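The final step reduces to a subtraction and a clip, roughly as below; residual_input and
refined_residual stand in for the arrays produced by the two stages, both assumed to be float
arrays in [0, 1].

    import numpy as np
    from PIL import Image

    dehazed = np.clip(residual_input - refined_residual, 0.0, 1.0)
    Image.fromarray((dehazed * 255).astype(np.uint8)).save('dehazed.jpg')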
Fig. 48 Refined Transmission Map
                          Fig. 50 Residual Model Output Image

Here the code saves all the images produced by the model as output, using the np.clip
function to ensure the pixel values lie within the valid range of 0 to 1 before saving.
    CHAPTER 5: RESULTS AND EVALUATION
5.1 RESULTS
After successfully training both networks, we validate the models and see how they perform on
the testing dataset.

• Model Learning Rate – shows the learning rate of the model over the epochs.
• Model Loss – shows the model loss over the epochs.

Here, the transmission model along with its weights is saved.
                  Fig. 57 Plotting the Model Loss and Model Learning Rate

• Model Learning Rate – shows the learning rate of the model over the epochs.

                                    Fig. 59 Model Loss

Here, the residual model along with its weights is saved.
After successfully training and testing both models, we implemented the web application and
integrated the models within its framework.
Fig. 61 Home Page of our Web Application
Fig. 62 Dehazed Image generated by the web application
Here it is clear that the web application works properly, efficiently transforming the foggy
image into a dehazed one, as demonstrated in Chapter 4 (Section 4.2), where the trained model
was shown to be capable of efficiently dehazing images.

• The requirement that the model be able to dehaze both indoor and outdoor images is also
  satisfied.
• The model is fast enough to dehaze an image within 2 seconds of it being uploaded.
                                    CHAPTER - 6
            CONCLUSIONS AND FUTURE SCOPE
   6.1 CONCLUSION
According to the project report, the team successfully created a single-image dehazing
solution based on a residual-based deep convolutional neural network (CNN). This method
eliminates the necessity for atmospheric light estimation, increasing the effectiveness of
image dehazing. The network model is divided into two phases, with the residual network in
charge of learning the ambient light values. Extensive testing was carried out on both the
NYU2 depth dataset and the RESIDE dataset, and the proposed model outperformed previous
methods in terms of qualitative evaluation criteria. Notably, the model demonstrated
significant effectiveness in dehazing varied settings, with little colour distortion or image
blurring observed. The results closely matched accepted criteria, demonstrating the efficacy
of the proposed approach.
                               REFERENCES
[1] H. Ullah et al., "Light-DehazeNet: A Novel Lightweight CNN Architecture for Single Image
Dehazing," IEEE Transactions on Image Processing, vol. 30, pp. 8968-8982, 2021, doi:
10.1109/TIP.2021.3116790.
[2] X. Zhang, T. Wang, W. Luo and P. Huang, "Multi-Level Fusion and Attention-Guided CNN for
Image Dehazing," IEEE Transactions on Circuits and Systems for Video Technology, vol. 31,
no. 11, pp. 4162-4173, Nov. 2021, doi: 10.1109/TCSVT.2020.3046625.
[3] B. S. N. V. Chaitanya and S. Mukherjee, "Single image dehazing using improved cycleGAN,"
Journal of Visual Communication and Image Representation, vol. 74, 103014, 2021, ISSN
1047-3203, doi: 10.1016/j.jvcir.2020.103014.
[5] Y. Du and X. Li, "Recursive Deep Residual Learning for Single Image Dehazing," 2018
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake
City, UT, USA, 2018, pp. 843-8437, doi: 10.1109/CVPRW.2018.00116.
[6] C. Li, J. Guo, F. Porikli, H. Fu and Y. Pang, "A Cascaded Convolutional Neural Network
for Single Image Dehazing," IEEE Access, vol. 6, pp. 24877-24887, 2018, doi:
10.1109/ACCESS.2018.2818882.
[7] Y. Song, J. Li, X. Wang and X. Chen, "Single Image Dehazing Using Ranking Convolutional
Neural Network," IEEE Transactions on Multimedia, vol. 20, no. 6, pp. 1548-1560, June 2018,
doi: 10.1109/TMM.2017.2771472.
[9] E. Alharbi, P. Ge and H. Wang, "A Research on Single Image Dehazing Algorithms Based on
Dark Channel Prior," Journal of Computer and Communications, vol. 4, pp. 47-55, 2016, doi:
10.4236/jcc.2016.42006.
[10] C.-H. Yeh et al., "Single image dehazing via deep learning-based image restoration,"
2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference
(APSIPA ASC), 2018, doi: 10.23919/apsipa.2018.8659733.
[11] W. Ren et al., "Single image dehazing via multi-scale convolutional neural networks,"
Computer Vision – ECCV 2016, pp. 154-169, 2016, doi: 10.1007/978-3-319-46475-6_10.
[12] J.-H. Kim et al., "Optimized contrast enhancement for real-time image and video
dehazing," Journal of Visual Communication and Image Representation, vol. 24, no. 3, pp.
410-425, 2013, doi: 10.1016/j.jvcir.2013.02.004.
[13] C. Hodges, M. Bennamoun and H. Rahmani, "Single image dehazing using deep neural
networks," Pattern Recognition Letters, vol. 128, pp. 70-77, 2019, doi:
10.1016/j.patrec.2019.08.013.
[14] G. Sahu et al., "Single image dehazing using a new color channel," Journal of Visual
Communication and Image Representation, vol. 74, 103008, 2021, doi:
10.1016/j.jvcir.2020.103008.
[15] Y. Jiang et al., "Image dehazing using adaptive bi-channel priors on superpixels,"
Computer Vision and Image Understanding, vol. 165, pp. 17-32, 2017, doi:
10.1016/j.cviu.2017.10.014.
[16] Z. Rong and W. L. Jun, "Improved wavelet transform algorithm for single image dehazing,"
Optik, vol. 125, no. 13, pp. 3064-3066, 2014, doi: 10.1016/j.ijleo.2013.12.077.
[17] Y. Gao et al., "A fast image dehazing algorithm based on negative correction," Signal
Processing, vol. 103, pp. 380-398, 2014, doi: 10.1016/j.sigpro.2014.02.016.
[18] S. Yin, Y. Wang and Y.-H. Yang, "Attentive U-recurrent encoder-decoder network for image
dehazing," Neurocomputing, vol. 437, pp. 143-156, 2021, doi: 10.1016/j.neucom.2020.12.081.
[19] T. Feng et al., "URNet: A U-net based residual network for image dehazing," Applied Soft
Computing, vol. 102, 106884, 2021, doi: 10.1016/j.asoc.2020.106884.
[20] K. Ashwini, H. Nenavath and R. K. Jatoth, "Image and video dehazing based on
transmission estimation and refinement using Jaya algorithm," Optik, vol. 265, 169565, 2022,
doi: 10.1016/j.ijleo.2022.169565.
[21] W. Ren and X. Cao, "Deep Video Dehazing," Advances in Multimedia Information Processing
– PCM 2017, pp. 14-24, 2018, doi: 10.1007/978-3-319-77380-3_2.
[22] X. Lv, W. Chen and I. Shen, "Real-time dehazing for image and video," 2010 18th Pacific
Conference on Computer Graphics and Applications, 2010, doi: 10.1109/pacificgraphics.2010.16.
[23] L. T. Goncalves et al., "DeepDive: An end-to-end dehazing method using deep learning,"
2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), 2017, doi:
10.1109/sibgrapi.2017.64.
[24] X. Zhang et al., "Learning to restore hazy video: A new real-world dataset and a new
method," 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021,
doi: 10.1109/cvpr46437.2021.00912.
[25] S. Zhang, F. He and J. Yao, "Single image dehazing using deep convolution neural
networks," Advances in Multimedia Information Processing – PCM 2017, pp. 128-137, 2018, doi:
10.1007/978-3-319-77380-3_13.