The document summarizes key concepts in analog and digital video. It describes component, composite, and S-video signals, explaining that component video provides the best color reproduction without crosstalk between channels but requires more bandwidth. It also discusses the fundamentals of visual representation for video, including frame rates, aspect ratios, luminance, and temporal aspects like flicker. Finally, it covers analog video representation and interlaced scanning techniques.


Chapter 5

Fundamental Concepts in Video

5.1 Types of Video Signals


5.2 Analog Video
5.3 Digital Video
Fundamentals of Multimedia, Chapter 5

5.1 Types of Video Signals


Component video
• Component video: Higher-end video systems make use of
three separate video signals for the red, green, and blue image
planes. Each color channel is sent as a separate video signal.

(a) Most computer systems use Component Video, with separate signals
for R, G, and B.

(b) For any color separation scheme, Component Video gives the best
color reproduction since there is no “crosstalk” between the three
channels.

(c) This is not the case for S-Video or Composite Video, discussed next.
Component video, however, requires more bandwidth and good
synchronization of the three components.


Component Video


Composite Video
• Composite video: color (“chrominance”) and intensity
(“luminance”) signals are mixed into a single carrier wave.
a) Chrominance is a composition of two color components (I and Q, or U and V).

b) In NTSC TV, e.g., I and Q are combined into a chroma signal, and a color
subcarrier is then employed to put the chroma signal at the high-frequency
end of the signal shared with the luminance signal.

c) The chrominance and luminance components can be separated at the receiver
end, and then the two color components can be further recovered.

d) When connecting to TVs or VCRs, Composite Video uses only one wire and
video color signals are mixed, not sent separately. The audio and sync signals
are additions to this one signal.
• Since color and intensity are wrapped into the same signal,
some interference between the luminance and chrominance
signals is inevitable.
Composite Video
• The most common form of connecting external
devices, putting all the video information into one
signal.
• A BNC connector, used with coaxial cable, takes its name from
Bayonet Nut Connector (a bayonet-style locking connector)
• An RCA connector is named after the Radio Corporation of America


S-Video
• S-Video (separated video, or Super-video, e.g., in S-VHS): as a
compromise, it uses two wires, one for luminance and another for a
composite chrominance signal.

• As a result, there is less crosstalk between the color information and the
crucial gray-scale information.

• The reason for placing luminance into its own part of the signal is that black-
and-white information is most crucial for visual perception.

– In fact, humans are able to differentiate spatial resolution in grayscale images
with a much higher acuity than for the color part of color images.

– As a result, we can send less accurate color information than must be sent for
intensity information — we can only see fairly large blobs of color, so it makes
sense to send less color detail.



Visual Representation
In order to accurately convey both spatial and
temporal aspects of a scene, the following
properties are considered
1. Vertical Details and Viewing Distance
• The geometry of a television image is based on the ratio
of the picture width W to the picture height H (W/H),
called the aspect ratio.
– Conventional aspect ratio is 4:3.
• The angular field of view is determined by the viewing distance D,
usually expressed relative to picture height as the ratio D/H.


Visual Representation
2. Horizontal Detail and Picture Width
Can be determined from the aspect ratio
3. Total detail content of a picture
Since not all scan lines are visible to the observer, additional
information can be transmitted in the invisible lines.
4. Depth perception
Depth is perceived because each eye composes a picture of the scene
from a different angle.
In a flat TV picture, depth cues come from
• the perspective appearance of the subject matter
• the choice of focal length of the camera lens and changes in depth of focus


Visual Representation
5. Luminance
– RGB can be converted to a luminance (brightness) signal
and two color-difference (chrominance) signals for TV
signal transmission
6. Temporal Aspects of Illumination
– A discrete sequence of still images can be perceived as a
continuous sequence.
• The impression of motion is generated by a rapid succession of
barely differing still pictures (frames).
– Rate must be high enough to ensure smooth transition.
– Rate must be high enough so that the continuity of perception is not
disrupted by the dark intervals between pictures
• The light is cut off, briefly, between these frames.
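The luminance/chrominance conversion mentioned in item 5 can be sketched with the standard NTSC RGB-to-YIQ matrix (a minimal numpy illustration; the function name `rgb_to_yiq` is mine, not from the text):

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix: one luminance (brightness) row
# followed by two color-difference (chrominance) rows.
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luminance
    [0.596, -0.274, -0.322],   # I: in-phase chrominance
    [0.211, -0.523,  0.312],   # Q: quadrature chrominance
])

def rgb_to_yiq(rgb):
    """Convert an (..., 3) array of RGB values in [0, 1] to YIQ."""
    return np.asarray(rgb) @ RGB_TO_YIQ.T

# White has full luminance and zero chrominance:
y, i, q = rgb_to_yiq([1.0, 1.0, 1.0])   # y = 1.0, i = q = 0.0
```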


Visual Representation
7. Continuity of Motion
Continuity is perceived with at least 15 frames per second.
• To make motion appear smooth in a recorded film (not
synthetically generated), a rate of 30 frames per second is needed.
– Films recorded with 24 frames per second look strange when large
objects close to the viewer move quickly.
• NTSC (National Television Systems Committee) Standard
– Original: 30 frames/second
– Currently: 29.97 frames/second
• PAL (Phase Alternating Line) Standard
– 25 frames per second


Visual Representation
8. Flicker
If the refresh rate is low, a periodic fluctuation of the
perceived brightness can result.
• Minimum to avoid flicker is 50 Hz.
• Technical measures in movies and TV have allowed
lower refresh rates.

The refresh rate is the number of times a display's image is
repainted or refreshed per second. A refresh rate of 75 Hz
means the image is refreshed 75 times in one second.


5.2 Analog Video


• An analog signal f(t) samples a time-varying image. So-called
“progressive” scanning traces through a complete picture (a frame)
row-wise for each time interval.

• In TV, and in some monitors and multimedia standards as well,
another system, called “interlaced” scanning, is used:

a) The odd-numbered lines are traced first, and then the even-numbered
lines are traced. This results in “odd” and “even” fields — two fields
make up one frame.

b) In fact, the odd lines (starting from 1) end up at the middle of a line
at the end of the odd field, and the even scan starts at a half-way point.


Fig. 5.1: Interlaced raster scan

c) Figure 5.1 shows the scheme used. First the solid (odd) lines are traced, P to Q, then
R to S, etc., ending at T; then the even field starts at U and ends at V.

d) The jump from Q to R, etc. in Figure 5.1 is called the horizontal retrace, during
which the electronic beam in the CRT is blanked. The jump from T to U or V to P
is called the vertical retrace.
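In array terms, the two fields are just the alternating rows of the frame (a small numpy sketch; `split_fields` is a hypothetical helper, not from the text):

```python
import numpy as np

def split_fields(frame):
    """Split a frame (rows = scan lines) into its two interlaced fields.

    Scan lines are numbered from 1, so the odd lines 1, 3, 5, ... are
    array rows 0, 2, 4, ...; the even lines are rows 1, 3, 5, ...
    """
    return frame[0::2], frame[1::2]

frame = np.arange(12).reshape(6, 2)          # 6 scan lines, 2 samples each
odd_field, even_field = split_fields(frame)  # 3 lines per field
```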

Interlacing – Progressive Scan


• Because of interlacing, the odd and even lines are displaced in
time from each other — generally not noticeable except when very
fast action is taking place on screen, when blurring may occur.

• For example, in the video in Fig. 5.2, the moving helicopter is
blurred more than is the still background.


Fig. 5.2: Interlaced scan produces two fields for each frame. (a) The
video frame, (b) Field 1, (c) Field 2, (d) Difference of Fields

• Since it is sometimes necessary to change the frame rate, resize,
or even produce stills from an interlaced source video, various
schemes are used to “de-interlace” it.

a) The simplest de-interlacing method consists of discarding one
field and duplicating the scan lines of the other field. The
information in one field is lost completely using this simple
technique.

b) Other more complicated methods that retain information from
both fields are also possible.
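Both ideas can be sketched in a few lines of numpy (hypothetical helper names; the first function is the simple discard-and-duplicate method from (a), the second a toy method that retains some field-2 information):

```python
import numpy as np

def deinterlace_line_double(frame):
    """Method (a): keep only field 1 (rows 0, 2, 4, ...) and show each of
    its scan lines twice; field 2 is discarded entirely."""
    return np.repeat(frame[0::2], 2, axis=0)

def deinterlace_blend(frame):
    """A simple method retaining both fields: blend each interior field-2
    line with the average of its two field-1 neighbours."""
    out = frame.astype(float).copy()
    interp = (out[0:-2:2] + out[2::2]) / 2.0    # field-1 interpolation
    out[1:-1:2] = (out[1:-1:2] + interp) / 2.0  # mix with field-2 data
    return out

frame = np.array([[0.], [10.], [2.], [10.], [4.], [10.]])
doubled = deinterlace_line_double(frame)
blended = deinterlace_blend(frame)
```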

• Analog video uses a small voltage offset from zero to indicate
“black”, and another value, such as zero, to indicate the start
of a line.


Fig. 5.3 Electronic signal for one NTSC scan line.



NTSC Video
• NTSC (National Television System Committee) TV standard is
mostly used in North America and Japan. It uses the familiar 4:3
aspect ratio (i.e., the ratio of picture width to its height) and uses
525 scan lines per frame at 30 frames per second (fps).

a) NTSC follows the interlaced scanning system, and each frame is
divided into two fields, with 262.5 lines/field.

b) Thus the horizontal sweep frequency is 525 × 29.97 ≈ 15,734 lines/sec,
so that each line is swept out in 1/15,734 sec ≈ 63.6 μsec.

c) Since the horizontal retrace takes 10.9 μsec, this leaves 52.7 μsec for
the active line signal during which image data is displayed (see
Fig.5.3).
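The timing arithmetic in (b) and (c) is easy to check (a short sanity-check script; constants taken from the text above):

```python
# NTSC line-timing arithmetic.
LINES_PER_FRAME = 525
FRAME_RATE = 29.97        # frames per second
H_RETRACE_US = 10.9       # horizontal retrace time, microseconds

line_rate = LINES_PER_FRAME * FRAME_RATE        # 15,734.25 lines/sec
line_time_us = 1e6 / line_rate                  # ~63.6 usec per scan line
active_line_us = line_time_us - H_RETRACE_US    # ~52.7 usec of image data
```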


• Fig. 5.4 shows the effect of “vertical retrace & sync” and
“horizontal retrace & sync” on the NTSC video raster.

Fig. 5.4: Video raster, including retrace and sync data



a) Vertical retrace takes place during the 20 lines reserved for
control information at the beginning of each field. Hence, the
number of active video lines per frame is only 485.

b) Similarly, almost 1/6 of the raster at the left side is blanked
for horizontal retrace and sync. The non-blanking pixels are
called active pixels.

c) Since the horizontal retrace takes 10.9 μsec, this leaves
52.7 μsec for the active line signal during which image data is
displayed (see Fig. 5.3).


• NTSC video is an analog signal with no fixed horizontal resolution.
Therefore one must decide how many times to sample the signal for display:
each sample corresponds to one pixel output.

• A “pixel clock” is used to divide each horizontal line of video into samples.
The higher the frequency of the pixel clock, the more samples per line there
are.

• Different video formats provide different numbers of samples per line, as
listed in Table 5.1.
Table 5.1: Samples per line for various video formats

Format          Samples per line
VHS             240
S-VHS           400-425
Betamax         500
Standard 8 mm   300
Hi-8 mm         425

Color Model and Modulation of NTSC

• NTSC uses the YIQ color model, and the technique of quadrature
amplitude modulation is employed to combine (the spectrally overlapped
part of) I (in-phase) and Q (quadrature) signals into a single chroma signal
C:

    C = I cos(Fsc t) + Q sin(Fsc t)        (5.1)

• This modulated chroma signal is also known as the color subcarrier, whose
magnitude is √(I² + Q²) and phase is tan⁻¹(Q/I). The frequency of C is
Fsc ≈ 3.58 MHz.

• The NTSC composite signal is a further composition of the luminance signal
Y and the chroma signal as defined below:

    composite = Y + C = Y + I cos(Fsc t) + Q sin(Fsc t)        (5.2)


• Fig. 5.5: NTSC assigns a bandwidth of 4.2 MHz to Y, and only
1.6 MHz to I and 0.6 MHz to Q due to human insensitivity to
color details (high-frequency color changes).

Fig. 5.5: Interleaving Y and C signals in the NTSC spectrum.


Decoding NTSC Signals


• The first step in decoding the composite signal at the
receiver side is the separation of Y and C.

• After the separation of Y using a low-pass filter, the chroma
signal C can be demodulated to extract the components I and Q
separately. To extract I:

1. Multiply the signal C by 2 cos(Fsc t), i.e.,

   C · 2 cos(Fsc t) = I · 2 cos²(Fsc t) + Q · 2 sin(Fsc t) cos(Fsc t)
                    = I · (1 + cos(2 Fsc t)) + Q · 2 sin(Fsc t) cos(Fsc t)
                    = I + I · cos(2 Fsc t) + Q · sin(2 Fsc t)


2. Apply a low-pass filter to obtain I and discard the two
higher-frequency (2Fsc) terms.

• Similarly, Q can be extracted by first multiplying C by
2 sin(Fsc t) and then low-pass filtering.
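The whole modulate/demodulate round trip can be checked numerically (a toy numpy demo with constant I and Q; averaging over an integer number of subcarrier periods stands in for the low-pass filter, and `w = 2π·Fsc` makes the angular frequency explicit):

```python
import numpy as np

Fsc = 3.58e6                        # NTSC color subcarrier (Hz)
I_true, Q_true = 0.4, -0.2          # constant chroma values (toy example)

# 64 samples per subcarrier period, over exactly 200 periods.
n = 64 * 200
t = np.arange(n) / (64 * Fsc)
w = 2 * np.pi * Fsc

C = I_true * np.cos(w * t) + Q_true * np.sin(w * t)   # Eq. (5.1)

# Demodulate: multiply by 2cos / 2sin; averaging over whole periods
# removes the 2*Fsc terms, acting as an ideal low-pass filter.
I_rec = np.mean(C * 2 * np.cos(w * t))
Q_rec = np.mean(C * 2 * np.sin(w * t))
```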


• The NTSC bandwidth of 6 MHz is tight. Its audio subcarrier
frequency is 4.5 MHz. The picture carrier is at 1.25 MHz, which
places the center of the audio band at 1.25 + 4.5 = 5.75 MHz in
the channel (Fig. 5.5). But notice that the color is placed at
1.25 + 3.58 = 4.83 MHz.

• So the audio is a bit too close to the color subcarrier — a cause
for potential interference between the audio and color signals.
It was largely due to this reason that NTSC color TV actually
slowed down its frame rate to 30 × 1,000/1,001 ≈ 29.97 fps.

• As a result, the adopted NTSC color subcarrier frequency is
slightly lowered to

    fsc = 30 × 1,000/1,001 × 525 × 227.5 ≈ 3.579545 MHz,

where 227.5 is the number of color samples per scan line in
NTSC broadcast TV.
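The bookkeeping above reduces to a few multiplications (simple arithmetic check; values from the text):

```python
# NTSC channel bookkeeping, in MHz.
frame_rate = 30 * 1000 / 1001                 # ~29.97 fps
fsc = frame_rate * 525 * 227.5 / 1e6          # color subcarrier ~3.579545 MHz
audio_center = 1.25 + 4.5                     # audio band center: 5.75 MHz
color_center = 1.25 + 3.58                    # color subcarrier: 4.83 MHz
```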

PAL Video
• PAL (Phase Alternating Line) is a TV standard widely used in Western
Europe, China, India, and many other parts of the world.

• PAL uses 625 scan lines per frame, at 25 frames/second, with a 4:3 aspect
ratio and interlaced fields.
(a) PAL uses the YUV color model. It uses an 8 MHz channel and
allocates a bandwidth of 5.5 MHz to Y, and 1.8 MHz each to U and V.
The color subcarrier frequency is fsc ≈ 4.43 MHz.

(b) In order to improve picture quality, chroma signals have alternate signs
(e.g., +U and -U) in successive scan lines, hence the name “Phase
Alternating Line”.

(c) This facilitates the use of a (line rate) comb filter at the receiver — the
signals in consecutive lines are averaged so as to cancel the chroma
signals (that always carry opposite signs) for separating Y and C and
obtaining high quality Y signals.
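The cancellation in (b)-(c) can be seen with two toy scan-line samples (a numeric sketch, not from the text; one sample per line, luminance Y plus phase-alternated chroma U):

```python
# Two consecutive PAL scan lines carrying the same luminance Y but
# chroma U with alternating sign.
Y, U = 0.7, 0.15
line_a = Y + U      # +U line
line_b = Y - U      # -U line (phase alternated)

# Comb filter: the average cancels the chroma, the difference recovers it.
y_recovered = (line_a + line_b) / 2
u_recovered = (line_a - line_b) / 2
```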

SECAM Video
• SECAM stands for Système Electronique Couleur Avec Mémoire, the
third major broadcast TV standard.

• SECAM also uses 625 scan lines per frame, at 25 frames per second,
with a 4:3 aspect ratio and interlaced fields.

• SECAM and PAL are very similar. They differ slightly in their color
coding scheme:

(a) In SECAM, U and V signals are modulated using separate color
subcarriers at 4.25 MHz and 4.41 MHz respectively.

(b) They are sent in alternate lines, i.e., only one of the U or V signals will
be sent on each scan line.


• Table 5.2 gives a comparison of the three major analog broadcast
TV systems.

Table 5.2: Comparison of Analog Broadcast TV Systems

TV System   Frame Rate   # of Scan   Total Channel   Bandwidth Allocation (MHz)
            (fps)        Lines       Width (MHz)     Y     I or U   Q or V
NTSC        29.97        525         6.0             4.2   1.6      0.6
PAL         25           625         8.0             5.5   1.8      1.8
SECAM       25           625         8.0             6.0   2.0      2.0

World TV Standards

[World map showing which regions use NTSC, PAL, SECAM, PAL/SECAM,
or an unknown standard]

5.3 Digital Video


• The advantages of digital representation for video are many.
For example:

(a) Video can be stored on digital devices or in memory, ready to be
processed (noise removal, cut and paste, etc.), and integrated into
various multimedia applications;

(b) Direct access is possible, which makes nonlinear video editing
achievable as a simple, rather than a complex, task;

(c) Repeated recording does not degrade image quality;

(d) Ease of encryption and better tolerance to channel noise.


Chroma Subsampling
• Since humans see color with much less spatial resolution
than they see black and white, it makes sense to
“decimate” the chrominance signal.

• Interesting (but not necessarily informative!) names have arisen
to label the different schemes used.
• To begin with, numbers are given stating how many
pixel values, per four original pixels, are actually sent:

(a) The chroma subsampling scheme “4:4:4” indicates that no chroma
subsampling is used: each pixel’s Y, Cb and Cr values are
transmitted, 4 for each of Y, Cb, Cr.


(b) The scheme “4:2:2” indicates horizontal subsampling of the Cb,
Cr signals by a factor of 2. That is, of four pixels horizontally
labelled as 0 to 3, all four Ys are sent, and every two Cb’s and
two Cr’s are sent, as (Cb0, Y0)(Cr0, Y1)(Cb2, Y2)(Cr2, Y3)(Cb4, Y4),
and so on (or averaging is used).

(c) The scheme “4:1:1” subsamples horizontally by a factor of 4.

(d) The scheme “4:2:0” subsamples in both the horizontal and
vertical dimensions by a factor of 2. Theoretically, an average
chroma pixel is positioned between the rows and columns as shown
in Fig. 5.6.

• Scheme 4:2:0, along with other schemes, is commonly used in
JPEG and MPEG (see later chapters in Part 2).
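4:2:0 decimation as in (d) can be sketched as 2x2 block averaging of each chroma plane (a minimal numpy sketch; real codecs additionally fix exact chroma sample positions):

```python
import numpy as np

def subsample_420(chroma):
    """Average each 2x2 block of a chroma (Cb or Cr) plane, halving both
    dimensions as in 4:2:0. Height and width must be even."""
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

cb = np.array([[10., 20., 30., 40.],
               [10., 20., 30., 40.]])
cb_420 = subsample_420(cb)    # [[15., 35.]]
```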

Fig. 5.6: Chroma subsampling



CCIR Standards for Digital Video


• CCIR is the Consultative Committee for International
Radio, and one of the most important standards it has
produced is CCIR-601, for component digital video.

– This standard has since become standard ITU-R-601, an
international standard for professional video applications —
adopted by certain digital video formats including the popular
DV video.

• Table 5.3 shows some of the digital video specifications, all
with an aspect ratio of 4:3. The CCIR 601 standard uses an
interlaced scan, so each field has only half as much vertical
resolution (e.g., 240 lines in NTSC).

• CIF stands for Common Intermediate Format, specified by the
CCITT (International Telegraph and Telephone Consultative
Committee).

(a) The idea of CIF is to specify a format for lower bitrate.

(b) CIF is about the same as VHS quality. It uses a progressive
(non-interlaced) scan.

(c) QCIF stands for “Quarter-CIF”. All the CIF/QCIF resolutions
are evenly divisible by 8, and all except 88 are divisible by 16;
this provides convenience for block-based video coding in H.261
and H.263, discussed later in Chapter 10.

(d) Note, CIF is a compromise of NTSC and PAL in that it adopts
the NTSC frame rate and half of the number of active lines as
in PAL.

Table 5.3: Digital video specifications

                         CCIR 601    CCIR 601     CIF        QCIF
                         525/60      625/50
                         NTSC        PAL/SECAM
Luminance resolution     720 x 480   720 x 576    352 x 288  176 x 144
Chrominance resolution   360 x 480   360 x 576    176 x 144  88 x 72
Colour subsampling       4:2:2       4:2:2        4:2:0      4:2:0
Fields/sec               60          50           30         30
Interlaced               Yes         Yes          No         No

HDTV (High Definition TV)


• The main thrust of HDTV (High Definition TV) is not to increase the
“definition” in each unit area, but rather to increase the visual field
especially in its width.

(a) The first generation of HDTV was based on an analog technology developed
by Sony and NHK in Japan in the late 1970s.

(b) MUSE (MUltiple sub-Nyquist Sampling Encoding) was an improved NHK HDTV
with hybrid analog/digital technologies that was put in use in the 1990s.
It has 1,125 scan lines, interlaced (60 fields per second), and a 16:9
aspect ratio.

(c) Since uncompressed HDTV will easily demand more than 20 MHz bandwidth,
which will not fit in the current 6 MHz or 8 MHz channels, various
compression techniques are being investigated.

(d) It is also anticipated that high quality HDTV signals will be transmitted using
more than one channel even after compression.


• A brief history of HDTV evolution:

(a) In 1987, the FCC decided that HDTV standards must be compatible with the
existing NTSC standard and be confined to the existing VHF (Very High
Frequency) and UHF (Ultra High Frequency) bands.

(b) In 1990, the FCC announced a very different initiative, i.e., its preference for a
full-resolution HDTV, and it was decided that HDTV would be simultaneously
broadcast with the existing NTSC TV and eventually replace it.

(c) Witnessing a boom of proposals for digital HDTV, the FCC made a key
decision to go all-digital in 1993. A “grand alliance” was formed that included
four main proposals, by General Instruments, MIT, Zenith, and AT&T, and by
Thomson, Philips, Sarnoff and others.

(d) This eventually led to the formation of the ATSC (Advanced Television
Systems Committee) — responsible for the standard for TV broadcasting of
HDTV.

(e) In 1995 the U.S. FCC Advisory Committee on Advanced Television Service
recommended that the ATSC Digital Television Standard be adopted.


• The standard supports the video scanning formats shown in
Table 5.4. In the table, “I” means interlaced scan and “P” means
progressive (non-interlaced) scan.

Table 5.4: Advanced Digital TV formats supported by ATSC

# of Active        # of Active   Aspect Ratio   Picture Rate
Pixels per Line    Lines
1,920              1,080         16:9           60I 30P 24P
1,280              720           16:9           60P 30P 24P
704                480           16:9 & 4:3    60I 60P 30P 24P
640                480           4:3            60I 60P 30P 24P

(1920x1080 is the standard HDTV resolution.)

• For video, MPEG-2 is chosen as the compression standard. For
audio, AC-3 is the standard. It supports the so-called 5.1-channel
Dolby surround sound, i.e., five surround channels plus a
subwoofer channel.

• The salient differences between conventional TV and HDTV:

(a) HDTV has a much wider aspect ratio of 16:9 instead of 4:3.

(b) HDTV moves toward progressive (non-interlaced) scan. The
rationale is that interlacing introduces serrated edges to moving
objects and flickers along horizontal edges.


• The FCC planned to replace all analog broadcast services with
digital TV broadcasting by the year 2009. The services provided
will include:

– SDTV (Standard Definition TV): the current NTSC TV or higher.

– EDTV (Enhanced Definition TV): 480 active lines or higher,
i.e., the third and fourth rows in Table 5.4.

– HDTV (High Definition TV): 720 active lines or higher.


China's Digital TV Timetable
China's cable TV digitization was planned in four stages, rolling out
across three regions from east to west:

• Stage 1, by 2005: the municipalities, cities at prefecture level and
above in the eastern region, provincial capitals and prefecture-level
cities in the central region, and provincial capitals in the western
region transition to digital;

• Stage 2, by 2008: digital TV spreads to prefecture-level cities
generally and to a few western cities;

• Stage 3, by 2010: digitization extends to cities at county level and
above in the east and in most of the central region;

• Stage 4, by 2015: the western region basically completes digitization
and strives to finish the overall transition. According to the plan of
the State Administration of Radio, Film and Television, China was to
stop analog TV broadcasting in 2015 and achieve nationwide digital TV
coverage.

Aspect Ratio
• The ratio of picture width to height, Width/Height
• Television: 4:3 = 1.33, 16:9 = 1.78:1
• Film: cinema uses 1.85:1 or 2.35:1

Problems with Aspect Ratio
• What if the display AR is less than the production AR?
(e.g., a 16:9 picture on a 4:3 display)
Letterboxing
• Shrinking the picture to fit the display width; black bars fill
the remaining vertical space
Cropping
• Just cutting to fit…

Examples

Widescreen

Pan and Scan
• We can move the window
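The arithmetic behind letterboxing and cropping/pan-and-scan is a one-liner each (a small sketch with hypothetical helper names; a 16:9 source shown on a 4:3 display):

```python
def letterbox(disp_w, disp_h, src_ar):
    """Scale a src_ar picture to fit the display width; return the image
    height and the total height of the black bars."""
    img_h = round(disp_w / src_ar)
    return img_h, disp_h - img_h

def cropped_width(src_w, src_ar, disp_ar):
    """Source columns kept when cropping (pan & scan) to fill disp_ar."""
    return round(src_w / src_ar * disp_ar)

img_h, bars = letterbox(640, 480, 16 / 9)     # 360 image lines, 120 of bars
kept = cropped_width(1920, 16 / 9, 4 / 3)     # keep 1440 of 1920 columns
```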


Terminology

Analog TV – encoded A/V information transmitted via an analog
signal; a function of amplitude & frequency

Digital TV – developed in the 1990s; allows for better quality TV
& more programming choices

HDTV – broadcast at higher resolution (720p or higher)

Terminology
Resolution – measures an image's detail in pixels

Bandwidth – aka bit rate; measures how much data can be
transmitted over a medium

I “Interlaced” – odd & even lines in a frame drawn alternately

P “Progressive” – all lines in a frame drawn in sequence;
requires higher bandwidth

Ex 1
NTSC video has 525 lines per frame and 63.6 μsec
per line, with 20 lines per field of vertical retrace
and 10.9 μsec horizontal retrace.
(a) Where does the 63.6 μsec come from?
(b) Which takes more time, horizontal retrace or
vertical retrace? How much more time?
1/(525 lines/frame × 29.97 frames/sec) = 63.6×10⁻⁶ sec/line
horizontal retrace = 10.9×10⁻⁶ sec
vertical retrace = 20 lines × 63.6 μsec = 1272 μsec = 1.272 msec,
so vertical is 1272/10.9 ≈ 117 times longer than horizontal.

Ex 2
Which do you think has less detectable flicker, PAL in Europe or
NTSC in North America? Justify your conclusion.

PAL could be better since it has more lines, but is worse because
of fewer frames per second.

Ex 3
Sometimes the signals for television are combined
into fewer than all the parts required for TV
transmission.
(a) Altogether, how many and what are the signals
used for studio broadcast TV?
(b) How many and what signals are used in S-
Video? What does S-Video stand for?
(c) How many signals are actually broadcast for
standard analog TV reception? What kind of
video is that called?

(a) Five: R, G, B, audio, and sync.
(b) Luminance + chrominance (plus audio and sync); S-Video stands
for separated video.
(c) One; it is called composite video.

Ex 4
Show how the Q signal can be extracted from the
NTSC chroma signal C (Eq. 5.1) during the
demodulation process.
(a) Multiply the signal C by 2 sin(Fsc t).
(b) Apply a low-pass filter to obtain Q and discard the two
higher-frequency (2Fsc) terms.

Ex 6
We don’t see flicker on a workstation screen when displaying
video at NTSC frame rate. Why do you think this might be?

NTSC video is displayed at 30 frames per second, so flicker is
possibly present. Nonetheless, when video is displayed on a
workstation screen, the video buffer is read and then rendered on
the screen at a much higher rate, typically the refresh rate —
60 to 90 Hz — so no flicker is perceived. (And in fact most display
systems have double buffers, completely removing flicker: since
main memory is much faster than video memory, keep a copy of the
screen in main memory, and when this buffer update is complete,
the whole buffer is copied to the video buffer.)

Ex 7
Digital video uses chroma subsampling. What is
the purpose of this? Why is it feasible?
Human vision has less acuity in color vision than it has in
black and white. Therefore, it is acceptable perceptually
to remove a good deal of color information. In analog,
this is accomplished in broadcast TV by simply assigning
a smaller frequency bandwidth to color than to black and
white information. In digital, we “decimate” the color
signal by subsampling (typically, averaging nearby
pixels).
The purpose is to have less information to transmit or store.

Ex 8
What are the most salient differences between
ordinary TV and HDTV? What was the main
impetus for the development of HDTV?

More pixels, and an aspect ratio of 16:9 rather than 4:3.
Immersion — “being there”. Good for interactive systems and
applications such as virtual reality.

Ex 9
What is the advantage of interlaced video? What
are some of its problems?

Positive: reduces flicker.
Negative: introduces serrated edges to moving objects and flickers
along horizontal edges.

Ex 10
One solution that removes the problems of
interlaced video is to de-interlace it. Why can we
not just overlay the two fields to obtain a de-
interlaced image? Suggest some simple de-
interlacing algorithms that retain information
from both fields.
The second field is captured at a later time than the first, creating
a temporal shift between the odd and even lines of the image.
The methods used to overcome this are basically two: non-motion-compensated
and motion-compensated de-interlacing algorithms.

The simplest non-motion-compensated algorithm is called “Weave”; it
performs linear interpolation between the fields to fill in a full,
“progressive”, frame. A defect with this method is that moving edges show
up with significant serrated lines near them.

A better algorithm is called “Bob”: in this algorithm, one field is discarded
and a full frame is interpolated from a single field. This method generates
no motion artifacts (but of course detail is reduced in the resulting
progressive image).

In a vertical-temporal (VT) de-interlacer, vertical detail is reduced for
higher temporal frequencies. Other, non-linear techniques are also used.

Motion-compensated de-interlacing performs inter-field motion compensation
and then combines fields so as to maximize the vertical resolution of the
image.
