Development of a Side Scan Sonar Module
for the UnderWater Simulator
Dae-Hyeon Gwon1, Joowan Kim2, Moon Hwan Kim3, Ho Gyu Park4, Tae Yeong Kim5, Ayoung Kim∗
1,2,∗
Dept. of Civil and Environmental Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
(Tel : +82-42-350-3672; E-mail: [kdhx13,jw_kim,ayoungk]@kaist.ac.kr)
3,4,5
LIG Nex1 Maritime Research Center, Seongnam-si, Gyeonggi-do, 13488, Korea
(Tel : +82-31-8026-7000; E-mail: [moonanikim,hgpark77,taeyeong.kim]@lignex1.com)
Abstract—In this paper, we implemented a Side Scan Sonar (SSS) module for the UnderWater Simulator (UWSim) [1], focusing specifically on sonar image simulation. To obtain the simulated SSS images, we adopted the Lambertian diffusion model [2], and the beam profile was modeled with a pair of sine wave functions. We developed this sonar module in UWSim, which runs on the Robot Operating System (ROS) framework. For validation, we applied feature matching to the obtained sonar images and examined the inliers and matching performance.

Keywords—Side Scan Sonar, UnderWater Simulator, AUV, Image simulation

1. INTRODUCTION

Acquiring real-world data is challenging, especially for underwater robotics engineers. Simulators alleviate this data shortage by providing simulated data and a means to validate algorithms, and they are usually equipped with various underwater sensors for navigation and perception. We focus on one of the popular underwater simulators, UWSim, developed by the IRSLab of Jaume I University of Castellón [1]. The simulator provides a variety of underwater sensors to fully simulate the environment, including an inertial measurement unit (IMU), a Doppler velocity log (DVL), and cameras.

In this paper, we select the Side Scan Sonar (SSS) as the imaging sensor. Although cameras are commonly used for imaging, camera images deteriorate underwater because of heavy speckle noise. Compared to these limited vision sensors, sonar waves propagate well underwater, and sonar is widely used for underwater imaging [3]. Among the several sonar types, we choose the SSS. The SSS used in this work provides a high resolution of 10 cm × 5 cm with a wide sensing range of 70 m [4].

We developed the SSS module in UWSim [1]. Unlike the MATLAB-based bathymeter simulator [5], the proposed SSS module was developed in C++, which provides faster computation than MATLAB, and it adds to UWSim an SSS capability that was previously missing. We implemented a black-brown-yellow colormap, Rayleigh noise [6], and Speckle noise [7] so that the artificial SSS image approximates a real one.

We tested A-KAZE feature matching [8] on the simulated SSS images for validation. Among many visual features, A-KAZE was selected because of its reported matching performance and fast computation time [8].

2. RELATED RESEARCH

Underwater simulators are essential for underwater engineers, as they enable researchers to overcome the lack of data that is often encountered in underwater experiments. Several underwater simulators have been implemented previously. For example, UWSim [1] runs with open-source ROS on Ubuntu. It uses Open Scene Graph (OSG) and includes robot dynamics for underwater robots as well as fluid dynamics for the underwater environment. OSG can display the image stream of the underwater robots in real time with high graphic quality. In the study by the Heriot-Watt University OSL [5], the authors presented a sidescan sonar simulator and a bathymeter simulator in MATLAB. Their work focused on MATLAB without a ROS interface, whereas our work targets the missing sonar module for UWSim within the ROS framework.

The fundamentals of SSS, such as its physical properties, data processing, and image composition and enhancement, were covered by Helferty [9]. The SSS image simulation by Blake generates SSS images through ray-tracing rendering [10]. A more detailed SSS image model was introduced in [11]. Later, a simplified SSS image model based on the Lambertian diffusion model was presented, and it has been widely used in sonar image modeling by many researchers [2], [4], [5], [12]. In this work, the SSS beam profile is approximated as a plane to reduce the computational cost from three dimensions (3D) to two dimensions (2D). The beam waveform of the SSS is obtained through experiments or approximation, since a theoretical expression is hardly obtainable; we approximate the beam waveform by a pair of sinusoidal functions, similar to the work in [13]. Underwater sonar imaging typically suffers from both low- and high-frequency noise of the underwater environment. In this paper, the low-frequency noise is modeled by Rayleigh noise [6] and the high-frequency noise by Speckle noise [7].

SSS image matching using the correlation of image intensities, with wavelet-transform denoising, was proposed in [14]. State-of-the-art features have also been compared for matching
the SSS images [15]. For example, Scale Invariant Feature Transform (SIFT) [16], Speeded Up Robust Features (SURF) [17], Oriented FAST and Rotated BRIEF (ORB) [18], and A-KAZE [8] were compared. As reported in [8], the inlier ratio of A-KAZE on SSS images outperformed the other feature extractors, so we also test the A-KAZE feature detector on the simulated sonar images.

3. SIDE SCAN SONAR MODULE DEVELOPMENT

3.1. Side Scan Sonar Modeling

We adopted the simplified model of [12]: the SSS response is approximated by the Lambertian diffusion model, using the coordinate system shown in Fig. 1. The model (1) is the simplified Lambertian diffusion model applied to sound waves [12]:

I(p) = R K Φ cos(θ),   cos(θ) = (r · N) / (‖r‖‖N‖)    (1)

[Fig. 1. Side scan sonar coordinate: sensor origin o, seabed reflection point p, vector r between o and p, seabed normal N at p, angle θ between r and N, angle α between the x-axis and r, and beam profile Φ]

TABLE 1
SIDE SCAN SONAR PARAMETERS

Symbol   Explanation
o        Origin of the sensor coordinate frame
p        Reflection point on the seabed
r        Vector between points o and p
N        Normal vector at p
θ        Angle between r and N
α        Angle between the x-axis and r
Φ        Side scan sonar beam profile
R        Reflection coefficient
I(p)     Image intensity of the sonar wave
K        1 / I_max

The parameters of (1) and Fig. 1 are summarized in Table 1. Although the reflection coefficient R is in general a function of the surface properties at point p and the sonar frequency, R was assumed to be constant here because the frequency is fixed and the bottom is sand; in the simulation, R = 0.5 was used. The points o and p were then obtained from the sensor data, and K was computed from them. While it is essential to evaluate the sonar waveform, in practice only measured waveform data can be obtained and no closed-form expression is available, so the waveform is simulated by assuming approximate expressions [13].
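To make (1) concrete, the short C++ sketch below evaluates the Lambertian intensity for a single seabed point using OpenCV vector types. It is a minimal illustration of our reading of (1) and Table 1, not the actual UWSim module code: the function name lambertianIntensity, the orientation of r toward the sensor, and the clamping of back-facing returns are our own assumptions; only R = 0.5 is taken from the text.

// Minimal sketch of equation (1); names are illustrative, only R = 0.5 comes from the paper.
#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>

// I(p) = R * K * Phi * cos(theta),  cos(theta) = (r . N) / (||r|| ||N||)
double lambertianIntensity(const cv::Vec3d& o,   // sensor origin
                           const cv::Vec3d& p,   // seabed reflection point
                           const cv::Vec3d& N,   // seabed normal at p
                           double Phi,           // beam profile value for this direction
                           double K,             // normalization 1 / I_max
                           double R = 0.5)       // reflection coefficient (sand bottom)
{
  // Table 1 only defines r as the vector between o and p; here we orient it from the
  // seabed point toward the sensor so that theta is the incidence angle against N.
  const cv::Vec3d r = o - p;
  const double cosTheta = r.dot(N) / (cv::norm(r) * cv::norm(N));
  return R * K * Phi * std::max(cosTheta, 0.0);  // clamp back-facing returns to zero
}

In the module, Φ itself would come from the sinusoidal beam-profile approximation described in Section 2.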
3.2. Side Scan Sonar Module in ROS

The data flow between UWSim and ROS is illustrated in Fig. 2. UWSim communicates with ROS through the ROS interface, in which UWSim data is converted into ROS topics and published to ROS. We developed the SSS module by modifying this ROS interface, and the Lambertian diffusion model [12] is implemented there. From (1) we obtain a grayscale image, which is the raw SSS image; this image is published as the SSS image topic in Fig. 2.

[Fig. 2. Data flow between UWSim and ROS: the UWSim side (Osg::viewer, Osg::Widget, HUDCamera, virtual camera, scene rendering, UWSim viewer) feeds the ROS interface, which publishes the SSS image topic as a sensor_msgs::Image for OpenCV image processing]

Noise is then added to make the synthetic image resemble the actual image: Rayleigh noise for the low-frequency noise [6] and Speckle noise for the high-frequency noise [7], which reflect the physical nature of the underwater environment. These noises have also been used in the sidescan simulator of [5]. Rayleigh noise is introduced by adding a Rayleigh-distributed term to the image intensity, and Speckle noise by multiplying the image intensity with uniform noise. The noise model is

I′ = I · (1 + u) + r,   u ∼ U(0, γ),   r ∼ R(0, σ)    (2)

where I is the image intensity, I′ is the image intensity with noise, u is uniform noise U with variance γ, and r is Rayleigh noise R with variance σ. The output image therefore contains both Rayleigh noise (low frequency) and Speckle noise (high frequency).
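As a rough illustration of (2), the sketch below applies the multiplicative uniform (Speckle) term and the additive Rayleigh term to a normalized grayscale image. This is a minimal sketch of the noise model as written, not the module's implementation: the function name addSonarNoise, the CV_64FC1 image type, the interpretation of U(0, γ) as uniform on [0, γ] and R(0, σ) as a Rayleigh distribution with scale σ, the inverse-transform sampling, and the clipping to [0, 1] are all our own assumptions.

// Minimal sketch of the noise model (2); gamma and sigma are the parameters from the text.
#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>
#include <random>

// I' = I * (1 + u) + r,  u ~ U(0, gamma) (speckle),  r ~ Rayleigh(sigma)
void addSonarNoise(cv::Mat& img /* CV_64FC1, intensities in [0, 1] */,
                   double gamma, double sigma, std::mt19937& rng)
{
  std::uniform_real_distribution<double> speckle(0.0, gamma);
  std::uniform_real_distribution<double> unit(0.0, 1.0);
  for (int y = 0; y < img.rows; ++y)
    for (int x = 0; x < img.cols; ++x) {
      const double u = speckle(rng);                                        // multiplicative term
      const double r = sigma * std::sqrt(-2.0 * std::log(1.0 - unit(rng))); // Rayleigh sample
      double& I = img.at<double>(y, x);
      I = std::min(1.0, I * (1.0 + u) + r);                                 // clip to [0, 1]
    }
}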
[Fig. 3. The black-brown-yellow colormap for simulated images: R, G, and B intensity as a function of the grayscale value in [0, 1]]

Now we introduce our colormap for SSS image simulation. The colormap was applied in ROS with OpenCV. With the grayscale intensity I of the image normalized to the range 0 to 1, we define the RGB channels of the colormap as in (3):

R(I) = 1.25I for 0 ≤ I < 0.8;  1 for 0.8 ≤ I ≤ 1
G(I) = 0.7812I for 0 ≤ I < 0.8;  1.883I − 0.883 for 0.8 ≤ I ≤ 1    (3)
B(I) = 0.4975I for 0 ≤ I < 0.8;  −1.99I + 1.99 for 0.8 ≤ I ≤ 1

This colormap is shown in Fig. 3. The RGB value is (R, G, B) = (0, 0, 0) at I = 0, so that it maps to black. At grayscale I = 0.8 the RGB value is (1, 0.625, 0.398), which maps to brown, and at I = 1 it is (1, 1, 0), which maps to yellow. In between, the colormap is constructed by linear interpolation, as shown in Fig. 3.
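One compact way to apply the piecewise-linear map (3) with OpenCV is a 256-entry lookup table built once and applied with cv::LUT. The sketch below follows the coefficients of (3); the function names buildSonarColormap and colorizeSSS, the 8-bit quantization, and the BGR packing are our own illustrative choices rather than the module's actual code.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Build a 256-entry BGR lookup table implementing the black-brown-yellow map of (3).
cv::Mat buildSonarColormap()
{
  cv::Mat lut(1, 256, CV_8UC3);
  for (int i = 0; i < 256; ++i) {
    const double I = i / 255.0;  // grayscale intensity in [0, 1]
    const double R = (I < 0.8) ? 1.25 * I   : 1.0;
    const double G = (I < 0.8) ? 0.7812 * I : 1.883 * I - 0.883;
    const double B = (I < 0.8) ? 0.4975 * I : -1.99 * I + 1.99;
    lut.at<cv::Vec3b>(0, i) = cv::Vec3b(               // OpenCV stores channels as BGR
        cv::saturate_cast<uchar>(255.0 * B),
        cv::saturate_cast<uchar>(255.0 * G),
        cv::saturate_cast<uchar>(255.0 * R));
  }
  return lut;
}

// Usage: convert the 8-bit grayscale SSS image to 3 channels, then apply the LUT.
cv::Mat colorizeSSS(const cv::Mat& gray8u)
{
  static const cv::Mat lut = buildSonarColormap();
  cv::Mat gray3, color;
  cv::cvtColor(gray8u, gray3, cv::COLOR_GRAY2BGR);
  cv::LUT(gray3, lut, color);
  return color;
}

A precomputed table keeps the per-frame cost to a single lookup per pixel, which suits applying the colormap to every published SSS frame.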
4. SIMULATION

4.1. Test Environment in UWSim

Using the simulated sonar images, we tested a feature matching algorithm for validation. Fig. 4(a) illustrates the simulated pipe inspection scenario in an underwater environment. Each underwater pipe was 480 m long, and the four pipes were placed on the sea floor to form a square. In addition, four rocks of 1.7 m length were placed at the four corners. Fig. 4(b) shows an SSS image sample from the developed SSS module.

[Fig. 4. View of the UWSim and developed SSS module: (a) view of the simulation environment with UWSim, (b) SSS image sample]

4.2. Feature Matching Comparison

Within this simulated environment, we selected A-KAZE for SSS image matching. Fig. 5(a) shows the correspondences between images found by A-KAZE, and Table 2 summarizes the matching performance of the most widely used feature extractors in the computer vision area. SIFT [16], SURF [17], ORB [18], and A-KAZE [8] were compared by the number of inliers. Each feature extractor computed features in the two images, Nearest Neighbor Distance Ratio (NNDR) matching was then performed to find the corresponding points, and finally random sample consensus (RANSAC) was applied to remove outliers so that only inliers remained. The number of inliers was 8, 7, 4, and 11 for SIFT [16], SURF [17], ORB [18], and A-KAZE [8], respectively; A-KAZE [8] produced the largest number of inliers. Although we did not tune the parameters, we believe parameter optimization of A-KAZE [8] could further improve the inlier ratio.

Since underwater sonar images contain strong speckle noise and have low resolution, many feature matching methods may not succeed on them, and image preprocessing is essential. Because we add Rayleigh noise [6] and Speckle noise [7] to the SSS image, preprocessing such as image enhancement and noise reduction is needed, and we believe proper preprocessing would improve the feature matching performance even for side scan sonar images. Potential candidates are the median filter and the bilateral filter: the median filter is widely used for noise filtering, while the bilateral filter preserves image edges while alleviating noise. Because edges are expected to become feature points, bilateral filtering may provide better feature matching performance.
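The C++/OpenCV sketch below illustrates the matching pipeline just described: optional edge-preserving preprocessing, A-KAZE detection, NNDR matching, and RANSAC inlier counting. It is a minimal sketch of the described procedure, not the code used to produce Table 2; the NNDR ratio of 0.8, the RANSAC reprojection threshold of 3 pixels, the bilateral filter parameters, and the function name countAkazeInliers are illustrative assumptions.

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Count RANSAC inliers between two 8-bit grayscale SSS images using A-KAZE + NNDR matching.
int countAkazeInliers(const cv::Mat& img1, const cv::Mat& img2, bool preprocess = true)
{
  cv::Mat a, b;
  if (preprocess) {                                  // edge-preserving denoising before extraction
    cv::bilateralFilter(img1, a, 9, 50.0, 50.0);
    cv::bilateralFilter(img2, b, 9, 50.0, 50.0);
  } else {
    a = img1;
    b = img2;
  }

  cv::Ptr<cv::AKAZE> akaze = cv::AKAZE::create();
  std::vector<cv::KeyPoint> kp1, kp2;
  cv::Mat des1, des2;
  akaze->detectAndCompute(a, cv::noArray(), kp1, des1);
  akaze->detectAndCompute(b, cv::noArray(), kp2, des2);

  // NNDR: keep a match only if the best distance is clearly smaller than the second best.
  cv::BFMatcher matcher(cv::NORM_HAMMING);           // A-KAZE descriptors are binary by default
  std::vector<std::vector<cv::DMatch>> knn;
  matcher.knnMatch(des1, des2, knn, 2);
  std::vector<cv::Point2f> pts1, pts2;
  for (const auto& m : knn)
    if (m.size() == 2 && m[0].distance < 0.8f * m[1].distance) {
      pts1.push_back(kp1[m[0].queryIdx].pt);
      pts2.push_back(kp2[m[0].trainIdx].pt);
    }
  if (pts1.size() < 4) return 0;

  // RANSAC removes outliers; the inlier mask gives counts comparable to Table 2.
  cv::Mat mask;
  cv::findHomography(pts1, pts2, cv::RANSAC, 3.0, mask);
  return cv::countNonZero(mask);
}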
[Fig. 5. (a) The first image, obtained from the starting point. (b) The matched image, extracted from the loop closing point. (c) Image matching pairs from SSS image matching using A-KAZE]

TABLE 2
FEATURE MATCHING COMPARISON

Feature    Number of Inliers
SIFT       8
SURF       7
ORB        4
A-KAZE     11

5. CONCLUSION

We developed an SSS module for UWSim [1]. As a preliminary step toward an SSS simulator operating on UWSim [1], we acquired simulated SSS images using the Lambertian diffusion model [12]. The simplified SSS image equation was used, and the beam profile was estimated from a sinusoidal waveform. For validation, we applied A-KAZE [8] feature matching to the simulated images. We also implemented a black-brown-yellow colormap and applied Rayleigh noise [6] and Speckle noise [7] to reflect the physical characteristics of underwater noise.

For future work, we plan to use A-KAZE [8] feature matching for SSS bundle adjustment (BA). Robot odometry and SSS images will be generated for pose-graph simultaneous localization and mapping (SLAM), and the output of the SSS BA will then be merged into the SLAM framework as loop closing constraints of the pose graph.

ACKNOWLEDGEMENTS

This work is supported by LIG Nex1. Joowan Kim was financially supported by the Korea Ministry of Land, Infrastructure and Transport (MOLIT) via the 'U-City Master and Doctor Course Grant Program'.
REFERENCES

[1] M. Prats, J. Pérez, J. J. Fernández, and P. J. Sanz, "An open source tool for simulation and supervision of underwater intervention missions," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct 2012, pp. 2577–2582.
[2] H. Ragheb and E. R. Hancock, "Surface radiance correction for shape from shading," Proceedings of the International Conference Pattern Recognition, vol. 38, no. 10, pp. 1574–1595, 2005.
[3] M. F. Fallon, M. Kaess, H. Johannsson, and J. J. Leonard, "Efficient AUV navigation fusing acoustic ranging and side-scan sonar," in Proceedings of the IEEE International Conference on Robotics and Automation, May 2011, pp. 2398–2405.
[4] D. Langer and M. Hebert, "Building qualitative elevation maps from side scan sonar data for autonomous underwater navigation," in Proceedings of the IEEE International Conference on Robotics and Automation, Apr 1991, pp. 2478–2483, vol. 3.
[5] Y. Pailhas, Y. Petillot, C. Capus, and K. Brown, "Real-time sidescan simulator and applications," in Proceedings of the IEEE OCEANS-Europe Conference and Exhibition, May 2009, pp. 1–6.
[6] A. S. A. Ghani and N. A. M. Isa, "Underwater image quality enhancement through Rayleigh-stretching and averaging image planes," International Journal of Naval Architecture and Ocean Engineering, vol. 6, no. 4, pp. 840–866, 2014.
[7] S. Banerjee, R. Ray, S. N. Shome, and G. Sanyal, "Noise induced feature enhancement and object segmentation of forward looking sonar image," Procedia Technology, vol. 14, pp. 125–132, 2014.
[8] P. F. Alcantarilla, A. Bartoli, and A. J. Davison, "KAZE features," in Proceedings of the European Conference on Computer Vision, 2012.
[9] M. Helferty, "The geological interpretation of side-scan sonar," Reviews of Geophysics, vol. 28, pp. 357–380, 1990.
[10] V. S. Blake, "Simulation in underwater archaeological prospection," Conference of the Remote Sensing Society, 1996.
[11] S. Anstee, "Removal of range-dependent artifacts from sidescan sonar imagery," DTIC Document, Tech. Rep., 2001.
[12] E. Coiras, Y. Petillot, and D. M. Lane, "Multiresolution 3-D reconstruction from side-scan sonar images," IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 382–390, Feb 2007.
[13] C. de Jong, G. Lachapelle, S. Skone, and I. Elema, "Multibeam sonar theory of operation," Delft University Press, Delft, the Netherlands, Tech. Rep., 2002.
[14] N. Neretti, N. Intrator, and Q. Huynh, "Target detection in side-scan sonar images: expert fusion reduces false alarms," 2002.
[15] X.-F. Ye, P. Li, J.-G. Zhang, J. Shi, and S.-X. Guo, "A feature-matching method for side-scan sonar images based on nonlinear scale space," Journal of Marine Science and Technology, vol. 21, no. 1, pp. 38–47, 2016.
[16] D. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[17] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-up robust features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
[18] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in Proceedings of the IEEE International Conference on Computer Vision, Nov 2011, pp. 2564–2571.