Usp

The document covers the fundamentals of ocean acoustics, including the dependence of sound velocity on temperature, salinity, and depth, as well as typical sound velocity profiles and their implications for sound propagation. It discusses transmission loss phenomena, absorption models, and the reflection and transmission of sound waves at interfaces. Additionally, it details sonar performance prediction, sound propagation modeling techniques, sonar antenna design, and the processing chains for sonar systems.

Chapter 1: Fundamentals of Ocean Acoustics

1) How does the sound velocity depend on the different physical parameters?

As given in USP1:

c = 1449.2 + 4.6T - 0.055T^2 + 0.00029T^3 + (1.34 - 0.01T)(S - 35) + 0.016z \quad [\text{m/s}]

• T: temperature [°C] → strongest effect; increases c

• S: salinity [ppt] → moderate increase

• z: depth [m] (via pressure) → linear increase
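The formula above can be evaluated directly. A minimal sketch in Python; the function name `sound_speed` is mine, the coefficients are exactly those given in the notes:

```python
def sound_speed(T, S, z):
    """Sound speed [m/s] from the notes' formula (USP1).
    T: temperature [deg C], S: salinity [ppt], z: depth [m]."""
    return (1449.2 + 4.6*T - 0.055*T**2 + 0.00029*T**3
            + (1.34 - 0.01*T)*(S - 35) + 0.016*z)
```

For T = 10 °C, S = 35 ppt at the surface this gives about 1490 m/s, and the depth term adds exactly 16 m/s per 1000 m, as expected from the 0.016z coefficient.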

2) Please specify typical vertical sound velocity profiles.

From USP1:

• Surface channel: Velocity increases with depth to a certain level, then decreases.

• Deep channel (SOFAR): Minimum velocity at depth z_m; velocity increases above (due to T)
and below (due to pressure).

• Antiwaveguide: Monotonic decrease (hot surface layers).

• Two-axis channel: Formed by intrusion of warmer/saltier water (e.g., Mediterranean


outflow).

3) Sketch qualitatively sound ray diagrams for typical sound velocity profiles.

Descriptions (USP1, pg. 10–20):

• SOFAR Channel: Rays refract back toward minimum, forming trapped paths.

• Surface Duct: Multiple surface reflections.

• Antiwaveguide: Rays bend downward into shadow zones.

• Two-axis Channel: Double trapping regions.

(You should sketch ray paths with curvature toward regions of minimum sound speed.)

4) Which different phenomena determine the transmission loss of sound?

From USP1, pg. 21–22:

• Spreading Loss:

o Spherical: I \propto \frac{1}{R^2}

o Cylindrical: I \propto \frac{1}{r\,d}

• Absorption/Attenuation:

o Due to viscosity, relaxation effects

o Depends on f, T, S, pH, z

• Boundary Interactions: Surface and bottom reflections cause loss.

• Scattering: From inhomogeneities.
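Spreading and absorption can be combined into a one-way transmission loss. A minimal sketch, assuming spherical spreading and a given absorption coefficient (the function name and the dB/km convention for alpha are my choices):

```python
import math

def transmission_loss(R, alpha):
    """One-way TL [dB] for spherical spreading plus absorption:
    TL = 20 log10(R) + alpha * R/1000, with R in metres, alpha in dB/km."""
    return 20*math.log10(R) + alpha * R/1000.0
```

For example, 1 km with no absorption gives 60 dB of pure spreading loss; at 10 km with alpha = 1 dB/km the absorption adds another 10 dB on top of the 80 dB spreading term.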

5) Please report on the different absorption models considered.

i) Thorp Formula: (100 Hz–3 kHz)

\alpha = \frac{0.11 f^2}{1 + f^2} + \frac{44 f^2}{4100 + f^2} \quad [\text{dB/km}], \quad f \text{ in kHz}

ii) Schulkin & Marsh: (3 kHz–0.5 MHz)

\alpha = 8.686 \times 10^3 \left[ A \frac{f_1^2 f^2}{f_1^2 + f^2} + B f^2 \right] \quad [\text{dB/km}]

iii) Francois & Garrison: (100 Hz–1 MHz)

\alpha = A_1 P_1 \frac{f^2}{f_1^2 + f^2} + A_2 P_2 \frac{f^2}{f_2^2 + f^2} + A_3 P_3 f^2

Each term accounts for boric acid, magnesium sulfate, and pure water contributions.
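Of the three models, the Thorp formula is fully specified by the notes, so it can be coded directly. A sketch (function name mine; frequency in kHz as the constants require):

```python
def thorp_alpha(f_khz):
    """Thorp absorption [dB/km] exactly as given above; f in kHz,
    valid roughly from 100 Hz to 3 kHz."""
    f2 = f_khz**2
    return 0.11*f2/(1.0 + f2) + 44.0*f2/(4100.0 + f2)
```

At 1 kHz this evaluates to about 0.066 dB/km, and the absorption grows rapidly with frequency, which is why long-range sonars operate at low frequencies.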

6) Describe the reflection and transmission of sound waves at interfaces.

From USP1:

R = \frac{m\cos\phi_1 - n\cos\phi_2}{m\cos\phi_1 + n\cos\phi_2}, \quad T = \frac{2m\cos\phi_1}{m\cos\phi_1 + n\cos\phi_2}

• m = \rho_2 / \rho_1

• n = c_2 / c_1

• \phi_1: angle of incidence

• \phi_2: transmission angle via Snell's Law

These coefficients determine how much energy is reflected or transmitted across media boundaries
(e.g., water-air, water-sediment).
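A minimal sketch of these coefficients, using the notes' notation (m = rho2/rho1, n = c2/c1; be aware that textbooks differ in how they define n). The function name is mine:

```python
import math

def interface_coeffs(rho1, c1, rho2, c2, phi1):
    """Reflection and transmission coefficients at a fluid-fluid interface,
    per the formulas above; phi1 in radians, phi2 from Snell's law."""
    m, n = rho2/rho1, c2/c1
    sin_phi2 = n * math.sin(phi1)
    if abs(sin_phi2) > 1.0:
        raise ValueError("incidence beyond the critical angle")
    phi2 = math.asin(sin_phi2)
    denom = m*math.cos(phi1) + n*math.cos(phi2)
    R = (m*math.cos(phi1) - n*math.cos(phi2)) / denom
    T = 2*m*math.cos(phi1) / denom
    return R, T
```

Two sanity checks: identical media give R = 0, T = 1, and the pressure coefficients satisfy T = 1 + R at any angle.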

7) Explain the modeling of sound scattering at the surface and bottom as well as within the water
volume.
From USP1, scattering is due to:

• Surface: Wind/wave roughness (statistical roughness models)

• Bottom: Sediment irregularities (bottom type dependent)

• Volume: Particles, turbulence, bubbles

Scattering affects coherence, intensity and arrival structure of the signal.

8) Specify qualitatively the spectral distribution of the underwater ambient noise and indicate the
different contributing components.

Not directly in USP1, but known:

• < 100 Hz: Shipping noise

• 100 Hz – 1 kHz: Wind/wave noise

• > 10 kHz: Thermal noise dominates

• Others: Rain, biological sources (e.g., snapping shrimp)

Spectral density typically drops 6 dB/octave in mid-frequency bands.

9) What is the goal of sonar performance prediction?

To estimate detection capability under given environmental and system conditions:

• Evaluate SNR

• Assess detection ranges

• Optimize sonar settings

• Predict coverage and blind zones

10) How is the Sonar Equation defined?

\text{SNR} = \text{SL} - \text{TL} + \text{TS} - (\text{NL} - \text{DI})

11) Please specify the typical parameters required for evaluating the sonar equation.

• SL: Source Level [dB re 1 μPa @ 1m]

• TL: Transmission Loss [dB]

• TS: Target Strength [dB]

• NL: Noise Level [dB]

• DI: Directivity Index [dB]


• Others: Frequency, range, beamwidth, ambient noise spectrum
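The sonar equation is simple dB bookkeeping, so a one-line sketch suffices (function name mine; I read TL here as the total transmission loss along the signal path):

```python
def active_snr(SL, TL, TS, NL, DI):
    """Sonar equation as stated above; all quantities in dB:
    SNR = SL - TL + TS - (NL - DI)."""
    return SL - TL + TS - (NL - DI)
```

For illustrative values SL = 220, TL = 120, TS = 15, NL = 70, DI = 20 (all dB), the predicted SNR is 65 dB.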

Chapter 2: Sound Propagation Modeling

1) Describe qualitatively how the wave equation can be derived.

From USP2:

1. Use Continuity Equation (mass conservation)

2. Apply Euler’s Equation (momentum conservation)

3. Use Adiabatic Equation of State

Assuming linear acoustics, small perturbations and no viscosity:

\frac{\partial^2 p}{\partial t^2} = c^2 \nabla^2 p

2) Which different techniques have been considered to solve the wave equation?

From USP2:

• Ray Tracing (RT)

• Normal Mode (NM)

• Parabolic Equation (PE)

• Finite Element (FE)

• Finite Difference (FD)

• Fast Field Program (FFP)

3) When are the different techniques considered applicable?

• RT: High frequency, simple media

• NM: Range-independent, shallow water

• PE: Complex, range-dependent, low/moderate frequencies

• FFP: Efficient for axisymmetric environments

• FE/FD: General numerical solutions, any geometry

4) How can sound rays be constructed if the image source or ray tracing approach is used?

Image Source:

• Reflect source at boundaries to create virtual sources


• Superpose spherical waves from each image source

Ray Tracing:

• Solve ray path using Snell's law and sound speed gradients

• Track reflection and refraction events
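For a piecewise-constant sound speed profile, the ray-tracing step reduces to applying Snell's law layer by layer. A minimal sketch, assuming grazing angles (measured from the horizontal) so that cos(theta)/c is conserved; the function name is mine:

```python
import math

def grazing_angles(c_layers, theta0_deg):
    """Grazing angle of a ray in each layer of a piecewise-constant
    sound-speed profile, via Snell's law cos(theta)/c = const."""
    snell = math.cos(math.radians(theta0_deg)) / c_layers[0]
    angles = []
    for c in c_layers:
        cos_t = snell * c
        if cos_t > 1.0:
            raise ValueError("ray vertexes (turns around) in this layer")
        angles.append(math.degrees(math.acos(cos_t)))
    return angles
```

As the sound speed increases with depth, the grazing angle shrinks until the ray vertexes and turns back — the mechanism that traps rays around the SOFAR channel axis.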

5) Please explain qualitatively how the normal mode solution can be derived.

From USP2:

• Assume harmonic time dependence: p(r, z, t) = P(r, z)\, e^{j\omega t}

• Separate variables: P(r, z) = R(r) Z(z)

• Solve Helmholtz equation in each domain

• Boundary conditions yield discrete modes

• Sum all modes to reconstruct full pressure field

Chapter 3: Sonar Antenna Design

1) Describe the calculation of pressure fields generated by continuous or discrete apertures.

From USP3:

Discrete (N elements):

p(t, \mathbf{r}) = \sum_{n=1}^{N} Q_n \frac{e^{j(\omega t - k r_n)}}{r_n}

Continuous:

p(t, \mathbf{r}) = \int_S q_s(\mathbf{r}') \frac{e^{j(\omega t - k|\mathbf{r} - \mathbf{r}'|)}}{|\mathbf{r} - \mathbf{r}'|} \, dS

2) What does the beampattern of an antenna tell us and how can it be derived?

Beam pattern B(\phi, \theta): the spatial distribution of radiated energy as a function of direction

Derived as:

• Discrete: weighted phasor sum of elements

• Continuous: Fourier transform of aperture function


3) Report on the characteristic features and differences of beampatterns for circular, rectangular
and line apertures.

• Circular: Symmetrical main lobe, Bessel side lobes

• Rectangular: Rectangular symmetry, sinc-pattern

• Line: 1D control; beam narrow in one plane only

4) How does the beampattern of a line array change if the element spacing and/or the number of
elements is varied?

• More elements: narrower main beam, more directivity

• Larger spacing: grating lobes appear if d > \frac{\lambda}{2}

5) Explain how amplitude shading affects the beampattern of a line array.

• Shading (tapering): reduces side lobes

• Common windows: Hamming, Hann, Blackman

• Tradeoff: wider main lobe but cleaner beam
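The weighted-phasor-sum view of the beampattern, and the side-lobe reduction from shading, can be demonstrated in a few lines. A sketch under my own naming (`line_array_pattern`); the Hann-style taper stands in for the windows listed above:

```python
import math

def line_array_pattern(d_over_lambda, weights, theta_deg):
    """Normalized |B(theta)| of a line array as a weighted phasor sum;
    theta from broadside, element spacing d given in wavelengths."""
    psi = 2*math.pi*d_over_lambda*math.sin(math.radians(theta_deg))
    re = sum(w*math.cos(n*psi) for n, w in enumerate(weights))
    im = sum(w*math.sin(n*psi) for n, w in enumerate(weights))
    return math.hypot(re, im) / sum(weights)

uniform = [1.0]*8                                            # no shading
hann = [math.sin(math.pi*(n + 0.5)/8)**2 for n in range(8)]  # tapered weights
```

Scanning the pattern over angles well outside the main lobe shows the tapered array's side lobes sitting clearly below the uniform array's, at the cost of a broader main beam — exactly the tradeoff stated above.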

6) Describe how phase shading can be used either for electronic steering or for a broadening of the
main beam.

• Electronic steering: Apply phase gradient across elements

• Beam broadening: Vary phase non-linearly to spread beam

7) Specify the performance measures introduced for transmitter and receiver arrays

• Directivity Index (DI)

• Beamwidth

• Sidelobe Level

• Gain

• Effective Aperture

• Array Factor
Chapter 4: Sonar Signal Processing

1) Specify the transmitter and receiver processing chain of a sonar system

➤ Transmitter Processing Chain:

1. Waveform Generator

o Types:

▪ CW (Continuous Wave): sinusoidal pulses

▪ FM (Frequency Modulated): LFM, HFM, DFM

2. Array Shading

o Amplitude shading: reduces side-lobes

o Complex shading: amplitude + phase control (beam steering/shaping)

3. Power Amplifier

o Switching amps: high source levels

o Linear amps: SAS applications (enhanced coherence)

4. Transmitter Array

o Emits the waveform into the medium

➤ Receiver Processing Chain:

1. Receiver Array

o Captures the incoming echo signals

2. Signal Conditioning

o Preamplifier + Band-Pass Filter

o AGC / Time-Variable Gain (TVG)

o Quadrature Demodulation (analog or digital)

o Anti-Aliasing Filter + A/D conversion (16–24 bits)

3. Signal Processing

o Matched Filtering / Pulse Compression

o Beamforming (near/far-field, time/frequency domain)

o Synthetic Aperture Sonar, MVDR, MUSIC, ESPRIT

4. Information Processing

o Image Formation and Fusion

o Target Detection/Classification, ATR


2) What is the purpose of a matched filter and how can it be derived?

Purpose: Maximize Signal-to-Noise Ratio (SNR) for detecting known signals in noise.

Derivation:

Given:

• Received signal:

x(t) = e(t) + n(t) = a\,\tilde{s}(t - \tau) + \tilde{n}(t)

Output of a linear filter h(t):

y(t) = \int h(t')\, x(t - t')\, dt'

SNR at t = \tau is:

\gamma(h) = \frac{\left|\int h(t)\,\tilde{s}(t - \tau)\, dt\right|^2}{\mathrm{E}\!\left[\left|\int h(t)\,\tilde{n}(t)\, dt\right|^2\right]}

Using Parseval and Cauchy-Schwarz:

\hat{h}(t) = c \cdot \tilde{s}^*(-t)

Thus the matched filter is the time-reversed complex conjugate of the transmitted signal.

Maximum SNR:

\gamma_{\text{opt}} = \frac{a^2 \|\tilde{s}\|^2}{2N_0}
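In discrete time the derivation boils down to correlating the received samples with the conjugated, time-reversed replica. A minimal sketch (names and the toy chirp are mine), showing that the output peaks at the true echo delay with height equal to the replica energy:

```python
import cmath

def matched_filter_output(x, s):
    """Correlate received samples x with replica s, i.e. apply
    h[k] = conj(s[-k]); returns |y[m]| for candidate delays m."""
    return [abs(sum(x[m+k]*s[k].conjugate() for k in range(len(s))))
            for m in range(len(x) - len(s) + 1)]

# toy chirp-like replica and an echo delayed by 5 samples
s = [cmath.exp(1j*0.1*k*k) for k in range(16)]
x = [0j]*5 + s + [0j]*5
y = matched_filter_output(x, s)
delay = max(range(len(y)), key=y.__getitem__)
```

Here `delay` comes out as 5, and the peak value equals the replica energy (16 for 16 unit-magnitude samples), which is the SNR-maximizing property derived above.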

3) Explain the notion analytical signal and complex envelope.

• Analytical signal:

s_+(t) = s(t) + j\,\hat{s}(t)

where \hat{s}(t) is the Hilbert transform of s(t)

• Complex envelope:
For a band-pass signal \tilde{s}(t):

\tilde{s}(t) = \Re\{s(t)\, e^{j\omega_c t}\}

where s(t) = A(t)\, e^{j\phi(t)} is the complex envelope

4) How can the complex envelope be obtained?

By quadrature demodulation:

1. Multiply \tilde{s}(t) with 2\cos(\omega_c t) → In-phase component

2. Multiply \tilde{s}(t) with -2\sin(\omega_c t) → Quadrature component

3. Apply a Low-Pass Filter (LP) to both

4. Combine:

s(t) = s_I(t) + j\, s_Q(t)
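The four steps above can be sketched for a constant-envelope tone, using plain averaging as the low-pass filter (valid when the envelope varies slowly over the record; function name mine):

```python
import math

def complex_envelope_tone(x, fc, fs):
    """Quadrature demodulation of a real band-pass record x: mix with
    2cos and -2sin of the carrier, low-pass by averaging."""
    N = len(x)
    sI = sum(v *  2*math.cos(2*math.pi*fc*n/fs) for n, v in enumerate(x)) / N
    sQ = sum(v * -2*math.sin(2*math.pi*fc*n/fs) for n, v in enumerate(x)) / N
    return complex(sI, sQ)

fs, fc, A, phi = 8000.0, 1000.0, 1.5, 0.7
x = [A*math.cos(2*math.pi*fc*n/fs + phi) for n in range(800)]
env = complex_envelope_tone(x, fc, fs)  # recovers A*exp(j*phi)
```

The factor 2 in the mixers compensates for the halving when the double-frequency terms are filtered out, so the recovered I/Q pair is exactly A cos(phi) + j A sin(phi).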

5) Why is the quadrature demodulation useful?

• Extracts the complex envelope from a real band-pass signal

• Enables:

o Lower sampling rates (baseband processing)

o Separation of amplitude and phase modulation

o Efficient digital signal processing

o Matched filtering in complex domain

6) How can the quadrature demodulation be implemented?

Implementations:

1. Analog Demodulation: mixing with sin/cos + LPF

2. Digital Demodulation: sample then mix in digital domain

3. Hilbert Transform Method:

o Use HT to create analytical signal

o Multiply with e^{-j\omega_c t}

4. Band-Pass Sampling + Real Mixing + Interpolation:

o Downsampled digital baseband signal using cos/sin modulation and filters

7) Does the quadrature demodulation have an impact on the SNR?

No.

• Before demodulation:

\frac{E_s}{\sigma_n^2} = \frac{1}{2\pi} \int |S(\omega)|^2\, d\omega

• After quadrature demodulation:

\frac{E_s}{\sigma_n^2} = \frac{1}{\pi} \int |S(\omega)|^2\, d\omega

→ Numerator and denominator both double → SNR unchanged.


8) Specify the matched filter required after quadrature demodulation.

Let the received complex envelope be:

x(t) = s(t) + n(t)

Then the optimal matched filter is:

\hat{h}(t) = c \cdot s^*(-t)

Max SNR:

\gamma_{\text{opt}} = \frac{a^2 \|s\|^2}{2N_0}

Same as in real domain.

9) Report on the various definitions of range resolution.

1. 3 dB Width:

\Delta t = t_+ - t_- \quad \text{where} \quad p(t_\pm) = \tfrac{1}{2} p(0)

2. Zero-Crossing Width:

p(t) = 0 \ \text{for } |t| \geq T \quad \Rightarrow \quad \Delta r = cT

3. Energy-Equivalent Pulse Width:

\Delta = \frac{\|p\|^2}{p(0)}

4. Separability Criterion (two point echoes become distinguishable when):

\tau > \Delta t
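The 3 dB definition is the easiest to measure on sampled data. A minimal sketch (my helper, applied to the pulse's envelope samples at spacing dt):

```python
def width_3db(p, dt):
    """3 dB width: duration over which a sampled pulse envelope p
    stays at or above half its peak value; dt is the sample spacing."""
    peak = max(p)
    above = [i for i, v in enumerate(p) if v >= peak/2]
    return (above[-1] - above[0]) * dt
```

For a symmetric triangular pulse sampled as [0, 1, 2, 3, 4, 3, 2, 1, 0] with dt = 1, the half-peak level is 2 and the measured width is 4 samples.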

10) Why has the Doppler effect to be treated differently for EM and sound waves?

Because of the reference medium:

• Sound waves require material medium (water). Speed is c relative to medium.

• EM waves (Radar): propagate in vacuum. Speed is constant for all observers.

→ Hence, Doppler formulas differ.

11) Specify the impact of the Doppler effect on the received signal.

If:

• f_S: source frequency

• v_S, v_T: platform and target velocities

• \alpha_S, \alpha_T: angles of motion

Then the received frequency is:

f_R = f_S \cdot \frac{c - v_T \cos \alpha_T}{c + v_S \cos \alpha_S}

For small velocities (v \ll c):

f_R \approx f_S \left(1 - \frac{v_r}{c}\right) \quad \text{with } v_r = v_T - v_S
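The exact formula above is easy to evaluate. A sketch (function name, degree-valued angles, and the nominal c = 1500 m/s default are my choices):

```python
import math

def received_frequency(f_s, v_s, v_t, alpha_s, alpha_t, c=1500.0):
    """Received frequency per the formula above; velocities in m/s,
    angles in degrees, c defaults to a nominal 1500 m/s."""
    return f_s * (c - v_t*math.cos(math.radians(alpha_t))) \
               / (c + v_s*math.cos(math.radians(alpha_s)))
```

With everything stationary the frequency is unchanged; a target moving at 5 m/s along the line of sight shifts a 10 kHz signal by about 33 Hz, consistent with the small-velocity approximation f_R ≈ f_S(1 − v_r/c).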

12) Which simplifying assumptions are usually exploited for Doppler modeling?

• Low speed: v \ll c

• Linear motion: constant radial velocity

• Narrowband assumption: allows modeling frequency shift as phase modulation

13) How can the contradiction between high energy and high range resolution be
circumvented?

By using pulse compression:

• Long-duration pulse → high energy

• Modulation (e.g. LFM) + matched filter → compressed in time → high resolution

14) Explain the principle of pulse compression.

• Transmit long pulse with frequency modulation (e.g., LFM)

• Matched filter compresses it in time

• Output is short pulse (point target response) with high peak

15) What is the relationship between pulse compression and matched filtering?

• Pulse compression is achieved via matched filtering

• The matched filter maximizes SNR and compresses the modulated pulse to a narrow peak

• Response:

p(t) = s(t) \ast s^*(-t)
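The compression can be demonstrated numerically: a long LFM pulse autocorrelated with itself (the matched-filter response to a point target) collapses to a peak only a few samples wide. A sketch with my own helper names, using a discrete complex-baseband chirp:

```python
import cmath

def lfm(N, bw):
    """Discrete complex-baseband LFM chirp of N samples; bw is the swept
    fraction of the sampling rate (kept below 0.5 to avoid aliasing)."""
    mu = bw / N
    return [cmath.exp(1j*cmath.pi*mu*n*n) for n in range(N)]

def autocorr_mag(s):
    """|p[m]| = |sum_n s[n] s*[n-m]|: the matched-filter point target response."""
    N = len(s)
    return [abs(sum(s[n]*s[n-m].conjugate()
                    for n in range(max(0, m), min(N, N+m))))
            for m in range(-(N-1), N)]

s = lfm(64, 0.4)
p = autocorr_mag(s)
peak = max(p)                              # pulse energy, at zero lag
narrow = sum(1 for v in p if v >= peak/2)  # half-power width in samples
```

The 64-sample pulse carries 64 units of energy, yet its compressed response is only a few samples wide at half power — roughly 1/B, the time-bandwidth tradeoff that resolves the energy-versus-resolution contradiction.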
