Multimedia UNIT III
CHAPTER 5:ANIMATION
Definition :
● Animation makes static presentations come alive.
● It is visual change over time and can add great power to your multimedia projects and
web pages.
● Many multimedia applications for both Macintosh and Windows provide animation
tools.
● You can animate your whole project, or you can animate here and there, accenting and
adding spice.
● For a brief product demonstration with little user interaction, it might make sense to
design the entire project as a video and keep the presentation always in motion.
● For speaker support, you can animate bulleted text or fly it onto the screen, or you can
use charts with quantities that grow or dwindle; then, give the speaker control of these
eye-catchers.
● Visual effects such as wipes, fades, zooms, and dissolves are available in most
multimedia authoring packages, and some of these can be used for primitive animation.
● Animation is an object actually moving across or into or out of the screen.
Principles of Animation
Figure 5-1 Animation authoring applications typically offer many visual effects and transitions.
● Digital television video builds 24, 30, or 60 entire frames or pictures every second,
depending upon settings; the speed with which each frame is replaced by the next one
makes the images appear to blend smoothly into movement.
● Movies on film are typically shot at a shutter rate of 24 frames per second, but using
projection tricks (the projector’s shutter flashes light through each image twice), the
flicker rate is increased to 48 times per second, and the human eye thus sees a motion
picture.
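To make that arithmetic concrete, here is a tiny illustrative snippet (not from the text) relating frame rate to per-frame display time and to the doubled flicker rate described above:

```python
# Illustrative arithmetic only: relate a frame rate to the time each frame is
# on screen, and show how a projector's double-flash shutter raises the
# flicker rate from 24 to 48 per second.

def frame_duration_ms(fps: float) -> float:
    """Time each frame stays on screen, in milliseconds."""
    return 1000.0 / fps

film_fps = 24                                  # typical film shutter rate
flashes_per_frame = 2                          # the projector flashes each frame twice
flicker_rate = film_fps * flashes_per_frame    # 48 flashes per second

print(f"{film_fps} fps -> {frame_duration_ms(film_fps):.1f} ms per frame")
print(f"Flicker rate with double flash: {flicker_rate} per second")
```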
Animation by Computer
● Using appropriate software and techniques, you can animate visual images in many ways.
● The simplest animations occur in two-dimensional (2-D) space; more complicated
animations occur in an intermediate “2½-D” space (where shadowing, highlights, and
forced perspective provide an illusion of depth, the third dimension); and the most
realistic animations occur in three-dimensional (3-D) space.
● In 2-D space, the visual changes that bring an image alive occur on the flat Cartesian x
and y axes of the screen.
● A blinking word, a color-cycling logo (where the colors of an image are rapidly altered
according to a formula), a cel animation (described more fully later on in this chapter), or
a button or tab that changes state on mouse rollover to let a user know it is active are all
examples of 2-D animations.
● These are simple and static, not changing their position on the screen.
● Path animation in 2-D space increases the complexity of an animation and provides
motion, changing the location of an image along a predetermined path (position) during a
specified amount of time (speed).
● Authoring and presentation software such as Flash or PowerPoint provide user-friendly
tools to compute position changes and redraw an image in a new location, allowing you
to generate a bouncing ball or slide a corporate mascot onto the screen.
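As a rough sketch of what such a tool computes internally (an illustration, not any particular product's API), the in-between positions of a path animation can be generated by simple interpolation between a start point and an end point over a chosen duration and frame rate:

```python
# A minimal sketch of path animation: given a start point, an end point, a
# duration, and a frame rate, compute the object's position for every frame.

def path_positions(start, end, duration_s, fps=30):
    """Return the (x, y) position of the object for each frame of the move."""
    frames = int(duration_s * fps)
    positions = []
    for i in range(frames + 1):
        t = i / frames                              # 0.0 at the start, 1.0 at the end
        x = start[0] + (end[0] - start[0]) * t
        y = start[1] + (end[1] - start[1]) * t
        positions.append((round(x), round(y)))
    return positions

# Slide a mascot from off-screen left onto the stage in 2 seconds.
mascot_path = path_positions((-50, 200), (320, 200), duration_s=2.0)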
● Combining changes in an image with changes in its position allows you to “walk” your
corporate mascot onto the stage. Changing its size from small to large as it walks onstage
will give you a 3-D perception of distance.
● In 2½-D animation, an illusion of depth (the z axis) is added to an image through
shadowing and highlighting, but the image itself still rests on the flat x and y axes in two
dimensions.
● Embossing, shadowing, beveling, and highlighting provide a sense of depth by raising
an image or cutting it into a background. Zaxwerks’ 3D Invigorator
(www.zaxwerks.com), for example, provides 3-D effects for text and images and, while
calling itself “3D,” works within the 2-D space of image editors and drawing programs
such as Adobe Illustrator, Photoshop, Fireworks, and After Effects.
● In 3-D animation, software creates a virtual realm in three dimensions, and changes
(motion) are calculated along all three axes (x, y, and z), allowing an image or object that
itself is created with a front, back, sides, top, and bottom to move toward or away from
the viewer, or, in this virtual space of light sources and points of view, allowing the
viewer to wander around and get a look at all the object’s parts from all angles.
● Such animations are typically rendered frame by frame by high-end 3-D animation
programs such as NewTek’s Lightwave or AutoDesk’s Maya.
Animation Techniques
● When you create an animation, organize its execution into a series of logical steps.
● First, gather up in your mind all the activities you wish to provide in the animation.
Cel Animation
● The animation techniques made famous by Disney use a series of progressively different
graphics or cels on each frame of movie film (which plays at 24 frames per second).
● A minute of animation may thus require as many as 1,440 separate frames, and each
frame may be composed of many layers of cels.
● The term cel derives from the clear celluloid sheets that were used for drawing each
frame, which have been replaced today by layers of digital imagery.
● Cel animation artwork begins with keyframes (the first and last frame of an action).
● The series of frames in between the keyframes are drawn in a process called tweening.
● Tweening is an action that requires calculating the number of frames between keyframes
and the path the action takes, and then actually sketching with pencil the series of
progressively different outlines.
● As tweening progresses, the action sequence is checked by flipping through the frames.
● The penciled frames are assembled and then actually filmed as a pencil test to check
smoothness, continuity, and timing.
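A minimal sketch of the digital equivalent of tweening, assuming only a start value and an end value for some property (an angle, a position) and the number of in-between frames:

```python
# A hedged sketch of tweening: given two keyframes (the first and last value of
# an action) and the number of in-between frames, generate each intermediate
# value by interpolation -- the same calculation a digital tween performs.

def tween(key_start: float, key_end: float, inbetweens: int):
    """Yield the interpolated values for the frames between two keyframes."""
    steps = inbetweens + 1                      # intervals between the keyframes
    for i in range(1, steps):
        t = i / steps
        yield key_start + (key_end - key_start) * t

# Example: an arm rotates from 0 to 90 degrees with 10 in-between frames.
inbetween_angles = list(tween(0.0, 90.0, 10))
```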
● Today, animators use reflective sensors applied to a person, animal, or other object whose
motion is to be captured.
● Cameras and computers convert the precise locations of the sensors into x,y,z coordinates
and the data is rendered into 3-D surfaces moving over time.
Computer Animation
● Computer animation programs typically employ the same logic and procedural concepts
as cel animation and use the vocabulary of classic cel animation—terms such as layer,
keyframe, and tweening.
● The primary difference among animation software programs is in how much must be
drawn by the animator and how much is automatically generated by the software.
● In path-based 2-D and 2½-D animation, an animator simply creates an object (or imports
an object as clip art) and describes a path for the object to follow.
● The computer software then takes over, actually creating the animation on the fly as the
program is being viewed by your user.
● In cel-based 2-D animation, each frame of an animation is provided by the animator, and
the frames are then composited (usually with some tweening help available from the
software) into a single file of images to be played in sequence.
● ULead’s GIF Animator (www.ulead.com/ga) and Alchemy’s GIF Construction Set Pro
(www.mindworkshop.com) simply string together your collection of frames.
● For 3-D animation, most of your effort may be spent in creating the models of individual
objects and designing the characteristics of their shapes and surfaces.
● It is the software that then computes the movement of the objects within the 3-D space
and renders each frame, in the end stitching them together in a digital output file or
container such as an AVI or QuickTime movie.
● On the computer, paint is most often filled or drawn with tools using features such as
gradients and anti-aliasing.
● The word inks, in computer animation terminology, usually means special methods for
computing color values, providing edge detection, and layering so that images can blend
or otherwise mix their colors to produce special transparencies, inversions, and effects.
● You can usually set your own frame rates on the computer. 2-D cel-based animated GIFs, for
example, allow you to specify how long each frame is to be displayed and how many
times the animation should loop before stopping.
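For example, a hedged sketch using the Pillow imaging library (the frame file names below are hypothetical) shows where the per-frame duration and loop count live when an animated GIF is written:

```python
# A minimal sketch using Pillow: the GIF file itself carries the per-frame
# display time (in milliseconds) and the loop count described above.

from PIL import Image

frame_files = ["ball1.png", "ball2.png", "ball3.png", "ball4.png"]   # your frames
frames = [Image.open(name).convert("P") for name in frame_files]

frames[0].save(
    "bouncing_ball.gif",
    save_all=True,
    append_images=frames[1:],   # the remaining frames follow the first
    duration=100,               # each frame displayed for 100 ms (about 10 fps)
    loop=0,                     # 0 = loop forever; a positive value limits the repeats
)
```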
● 3-D animations output as digital video files can be set to run at 15 or 24 or 30 frames
per second.
● However, the rate at which changes are computed and screens are actually refreshed will
depend on the speed and power of your user’s display platform and hardware, especially
for animations such as path animations that are being generated by the computer on the
fly.
● Although your animations will probably never push the limits of a monitor’s scan rate
(about 60 to 70 frames per second), animation does put raw computing horsepower to
task.
● 3-D animations are typically delivered as “pre-rendered” digital video clips. Software
such as Flash or PowerPoint, however, render animations as they are being viewed, so the
animation can be programmed to be interactive: touch or click on the jumping cat and it
turns toward you snarling; touch the walking woman and…
Kinematics
● Kinematics is the study of the movement and motion of structures that have joints, such
as a walking man.
● Animating a walking step is tricky: you need to calculate the position, rotation, velocity,
and acceleration of all the joints and articulated parts involved—knees bend, hips flex,
shoulders swing, and the head bobs.
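A small sketch of the underlying calculation, using a simplified two-joint “leg” (this illustrates the kinematics idea, not the workflow of any particular package): given the joint angles for one frame, compute where the knee and foot end up. Inverse kinematics, discussed below, works the other way around.

```python
# Forward kinematics of a two-link leg: from the hip position, segment lengths,
# and joint angles, compute the knee and foot positions for one frame.

import math

def leg_positions(hip, thigh_len, shin_len, hip_angle_deg, knee_angle_deg):
    """Return (knee, foot) positions; angles are measured from straight down."""
    a1 = math.radians(hip_angle_deg)
    a2 = math.radians(hip_angle_deg + knee_angle_deg)
    knee = (hip[0] + thigh_len * math.sin(a1), hip[1] - thigh_len * math.cos(a1))
    foot = (knee[0] + shin_len * math.sin(a2), knee[1] - shin_len * math.cos(a2))
    return knee, foot

knee, foot = leg_positions(hip=(0, 100), thigh_len=45, shin_len=40,
                           hip_angle_deg=20, knee_angle_deg=-30)
```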
● Smith Micro’s Poser (http://my.smithmicro.com), a 3-D modeling program, provides pre-
assembled adjustable human models (male, female, infant, teenage, and superhero) in
many poses, such as “walking” or “thinking.” As you can see in Figure 5-3, you can pose
figures in 3-D and then scale and manipulate individual body parts. Surface textures can
then be applied to create muscle-bound hulks or smooth chrome androids.
● Inverse kinematics, available in high-end 3-D programs such as Lightwave and Maya, is
the process by which you link objects such as hands to arms and define their relationships
and limits (for example, elbows cannot bend backward).
● Once those relationships and parameters have been set, you can then drag these parts
around and let the computer calculate the result.
Morphing
● Morphing is a popular (if not overused) effect in which one image transforms into
another.
● Morphing applications and other modeling tools that offer this effect can transition not
only between still images but often between moving images as well.
● Some products that offer morphing features are Black Belt’s Easy Morph and WinImages
(www.blackbeltsystems.com) and Human Software’s Squizz (www.humansoftware.com).
● The morphed images were built at a rate of eight frames per second, with each transition
taking a total of four seconds (32 separate images for each transition), and the number of
key points was held to a minimum to shorten rendering time.
● Setting key points is crucial for a smooth transition between two images.
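The arithmetic above (8 frames per second for 4 seconds gives 32 in-between images) can be sketched as follows; a real morph also warps both images toward shared key points, so this NumPy example shows only the cross-dissolve half of the job:

```python
# A simplified sketch, assuming NumPy arrays of identical size: blend the two
# images a little more in each successive frame (a cross-dissolve). A true
# morph would also warp both images toward their shared key points.

import numpy as np

def cross_dissolve(image_a: np.ndarray, image_b: np.ndarray, n_frames: int = 32):
    """Yield n_frames blended images stepping from image_a to image_b."""
    for i in range(n_frames):
        t = i / (n_frames - 1)                       # 0.0 -> 1.0 across the frames
        yield ((1.0 - t) * image_a + t * image_b).astype(image_a.dtype)
```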
A Rolling Ball
● First, create a new, blank image file that is 100 × 100 pixels, and fill it with a sphere.
● Create a new layer in Photoshop, and place some white text on this layer at the center of
the image.
● To animate the sphere by rolling it across the screen, you first need to make a number of
rotated images of the sphere. Rotate the image in 45-degree increments to create a total of
eight images, rotating a full circle of 360 degrees. When each is displayed sequentially in
the same place, the sphere appears to spin.
● For a realistic rolling effect, the circumference (calculated at pi times 100, or about 314
pixels) is divided by the number of images (eight), which works out to about 40 pixels of
travel per image.
● As each image is successively displayed, the ball is moved 40 pixels along a line.
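The same arithmetic, written out as a short sketch (circumference ≈ π × 100 ≈ 314 pixels, divided among the eight rotated images):

```python
# The rolling-ball arithmetic: a 100-pixel sphere has a circumference of about
# pi * 100 = 314 pixels, so with eight images 45 degrees apart the ball should
# advance roughly 314 / 8 = 40 pixels each time the next image is shown.

import math

DIAMETER = 100
N_IMAGES = 8

circumference = math.pi * DIAMETER           # ~314 pixels
step = circumference / N_IMAGES              # ~39.3 pixels, rounded to 40 in practice

# Frame-by-frame rotation angle and horizontal offset of the ball.
frames = [((i * 45) % 360, round(i * step)) for i in range(N_IMAGES * 3)]
# e.g. frame 0: (0 deg, 0 px), frame 1: (45 deg, 39 px), frame 2: (90 deg, 79 px), ...
```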
A Bouncing Ball
● With the simplest tools, you can make a bouncing ball to animate your web site using
GIF89a, an image format that allows multiple images to be put into a single file and then
displayed as an animation in a web browser or presentation program that recognizes the
format.
● The individual frames that make up the animated GIF can be created in any paint or
image-processing program, but it takes a specialized program to put the frames together
into a GIF89a animation.
● As with the rolling ball example, you simply need to flash a ball on the computer screen
rapidly and in a different place each time to make it bounce up and down.
● And as with the rolling ball, where you should compute the circumference of the ball and
divide by the number of images to determine how far it rolls each time it flashes, there
are some commonsense computations to consider with a bouncing ball, too. In the
formula, s equals distance, a equals acceleration due to gravity, and t equals time:
s = ½at²
● Gravity makes your bouncing ball accelerate on its downward course and decelerate on
its upward course (when it moves slower and slower until it actually stops and then
accelerates downward again).
● Unless your animation requires precision, ignore the hard numbers you learned in high
school (like 32 feet per second per second), and simply figure that your ball will
uniformly accelerate and decelerate up and down the pixels of your screen by the squares:
1, 4, 9, 16, 25, 36, 49, 64, 81, 100 are the squares of 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10.
● With a bit of programming, you might allow the user to choose the elasticity of the
object, the amount of gravity, and the length of fall. Some animation software provides
tools for this: it’s called “easing.”
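A short sketch of that spacing: with s = ½at² and the units simplified away, the successive offsets are just the squares listed above, which is all this kind of ease-in needs.

```python
# Vertical offsets for an accelerating fall: with s = (1/2) * a * t^2 and the
# constants folded into a single scale factor, the offsets after 1, 2, 3, ...
# time steps are proportional to the squares 1, 4, 9, 16, ...

def fall_offsets(n_frames: int = 10, scale: float = 1.0):
    """Pixel offsets for the fall; reverse the list for the decelerating rise."""
    return [round(scale * t * t) for t in range(1, n_frames + 1)]

down = fall_offsets()          # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
up = list(reversed(down))      # slower and slower on the way back up
```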
● Open a graphics program and paint a ball about 15 pixels in diameter (if you have an
odd-number diameter, there is a middle pixel that can be your center alignment point).
● If you want to get fancy, make the ball with a 3-D graphics tool that will shade it as a sphere. Then
duplicate the ball, placing each copy of it in a vertical line at the ten locations 1, 4, 9, 16,
25, 36, 49, 64, 81, and 100. The goal is to create a separate image file for each location of
the ball, like the pages of a flip-book.
● With Photoshop, you can create a single file with ten layers to contain each ball at its
proper location, and you can add an eleventh background layer, too. Then save each layer
showing against the background as a separate file.
● This is a construction process also easily managed with Director or Flash, in which you
can place the same cast member or object (the ball) where you wish on the presentation
stage.
● You can also add a background and other art elements, and then export each frame as a
graphics file using the export function. You will probably also wish to set the size of your
stage to a small area just sufficient for your animation, say 32 × 120 pixels. The smaller
the better if users will be downloading this animated GIF file into their web browsers.
Creating an Animated Scene
CHAPTER 6: VIDEO
Definition:
● Motion video is the element of multimedia that can draw gasps from a crowd at a trade
show or firmly hold a student’s interest in a computer-based learning project.
● Digital video is the most engaging of multimedia venues, and it is a powerful tool for
bringing computer users closer to the real world.
● It is also an excellent method for delivering multimedia to an audience raised on
television.
● Video standards and formats are still being refined as transport, storage, compression,
and display technologies take shape in laboratories and in the marketplace and while
equipment and post-processing evolves from its analog beginnings to become fully
digital, from capture to display.
● Of the multimedia elements, video places the highest performance demand on your
computer or device—and its memory and storage.
● Compression (and decompression), using special software called a codec, allows a
massive amount of imagery to be squeezed into a comparatively small data file, which
can still deliver a good viewing experience on the intended viewing platform during
playback.
● If you control the delivery platform for your multimedia project, you can specify special
hardware and software enhancements that will allow you to work with high-definition,
full-motion video, and sophisticated audio for high-quality surround sound.
● Or you can design a project to meet a specific compression standard, such as MPEG2 for
Digital Versatile Disc (DVD) playback or MPEG4 for home video.
● You can install a superfast RAID (Redundant Array of Independent Disks) system that
will support high-speed data transfer rates.
● You can include instructions in your authoring system that will spool video clips into
RAM, ready for high-speed playback before they need to play.
● Having control of the playback platform is always good, but it is seldom available in the
real world, so as you develop your video elements, you will need to make many choices
and compromises based upon your assessment of the “lowest common denominator”
playback platform where your project will be used.
❖ When light reflected from an object passes through a video camera lens, that light is
converted into an electronic signal by a special sensor called a charge-coupled device
(CCD).
❖ Top-quality broadcast cameras and even camcorders may have as many as three CCDs
(one for each color of red, green, and blue) to enhance the resolution of the camera and
the quality of the image.
❖ It’s important to understand the difference between analog and digital video.
❖ Analog video has a resolution measured in the number of horizontal scan lines (due to the
nature of early cathode-tube cameras), but each of those lines represents continuous
measurements of the color and brightness along the horizontal axis, in a linear signal that
is analogous to an audio signal.
❖ Digital video signals consist of a discrete color and brightness (RGB) value for each
pixel.
❖ Digitizing analog video involves reading the analog signal and breaking it into separate
data packets. This process is similar to digitizing audio, except that with video the
vertical resolution is limited to the number of horizontal scan lines.
❖ For some multimedia projects you may need to digitize legacy analog video.
❖ The following discussion will help you understand the differences between analog and
digital video and the old and new standards for horizontal lines, aspect ratios, and
interlacing.
Analog Video
❖ In an analog system, the output of the CCD is processed by the camera into three
channels of color information and synchronization pulses (sync) and the signals are
recorded onto magnetic tape.
❖ There are several video standards for managing analog CCD output, each dealing with
the amount of separation between the components—the more separation of the color
information, the higher the quality of the image (and the more expensive the equipment).
❖ If each channel of color information is transmitted as a separate signal on its own
conductor, the signal output is called component (separate red, green, and blue
channels), which is the preferred method for higher-quality and professional video work.
❖ Lower in quality is the signal that makes up Separate Video (S-Video), using two
channels that carry luminance and chrominance information.
❖ The least separation (and thus the lowest quality for a video signal) is composite, when
all the signals are mixed together and carried on a single cable as a composite of the three
color channels and the sync signal.
❖ The composite signal yields less-precise color definition, which cannot be manipulated or
color-corrected as much as S-Video or component signals.
❖ The analog video and audio signals are written to tape by a spinning recording head that
changes the local magnetic properties of the tape’s surface in a series of long diagonal
stripes.
❖ Because the head is canted or tilted at a slight angle compared with the path of the tape, it
follows a helical (spiral) path, which is called helical scan recording.
❖ A single video frame is made up of two fields that are interlaced (described in detail later
in the chapter).
❖ Audio is recorded on a separate straight-line track at the top of the videotape, although
with some recording systems (notably for ¾-inch tape and for ½-inch tape with high-
fidelity audio), sound is recorded helically between the video tracks.
❖ At the bottom of the tape is a control track containing the pulses used to regulate speed.
❖ Tracking is the fine adjustment of the tape during playback so that the tracks are properly
aligned as the tape moves across the playback head.
Figure 6-1 Diagram of tape path across the video head for analog recording
❖ Many consumer set-top devices like video cassette recorders (VCRs) and satellite
receivers add the video and sound signals to a subcarrier and modulate them into a radio
frequency (RF) in the FM broadcast band. This is the NTSC, PAL, or SECAM signal
available at the Antenna Out connector of a VCR.
❖ Usually the signal is modulated on either Channel 3 or Channel 4, and the resulting signal
is demodulated by the TV receiver and displayed on the selected channel.
❖ Many television sets today also provide a composite signal connector, an S-Video
connector, and a High-Definition Multimedia Interface (HDMI) connector for purely
digital input.
❖ Video displays for computers typically provide analog component (red, green, blue) input
through a 15-pin VGA connector and also a purely digital Digital Visual Interface (DVI)
and/or an HDMI connection.
❖ Three analog broadcast video standards are commonly in use around the world: NTSC,
PAL, and SECAM.
❖ In the United States, the NTSC standard has been phased out, replaced by the ATSC
Digital Television Standard.
❖ Because these standards and formats are not easily interchangeable, it is important to
know where your multimedia project will be used.
❖ A video cassette recorded in the United States (which uses NTSC) will not play on a
television set in any European country (which uses either PAL or SECAM), even though
the recording method and style of the cassette is “VHS.” Likewise, tapes recorded in
European PAL or SECAM formats will not play back on an NTSC video cassette
recorder.
❖ Each system is based on a different standard that defines the way information is encoded
to produce the electronic signal that ultimately creates a television picture.
❖ Multiformat VCRs can play back all three standards but typically cannot dub from one
standard to another. Dubbing between standards still requires high-end, specialized
equipment.
NTSC
❖ The United States, Canada, Mexico, Japan, and many other countries used a system for
broadcasting and displaying video that is based upon the specifications set forth by the
1952 National Television Standards Committee (NTSC).
❖ These standards defined a method for encoding information into the electronic signal that
ultimately created a television picture.
❖ As specified by the NTSC standard, a single frame of video was made up of 525
horizontal scan lines drawn onto the inside face of a phosphor-coated picture tube every
1/30th of a second by a fast-moving electron beam.
❖ The drawing occurred so fast that your eye would perceive the image as stable. The
electron beam actually made two passes as it drew a single video frame—first it laid
down all the odd-numbered lines, and then all the even-numbered lines.
❖ Each of these passes (which happen at a rate of 60 per second, or 60 Hz) painted a field,
and the two fields were then combined to create a single frame at a rate of 30 frames per
second (fps). (Technically, the speed is actually 29.97 Hz.)
PAL
❖ The Phase Alternate Line (PAL) system was used in the United Kingdom, Western
Europe, Australia, South Africa, China, and South America.
❖ PAL increased the screen resolution to 625 horizontal lines, but slowed the scan rate to
25 frames per second.
❖ As with NTSC, the even and odd lines were interlaced, each field taking 1/50 of a second
to draw (50 Hz).
SECAM
❖ The Sequential Color and Memory (SECAM) (taken from the French name, reported
variously as Système Électronic pour Couleur Avec Mémoire or Séquentiel Couleur
Avec Mémoire) system was used in France, Eastern Europe, the former USSR, and a few
other countries.
❖ Although SECAM is a 625-line, 50 Hz system, it differed greatly from both the NTSC
and the PAL color systems in its basic technology and broadcast method.
❖ Often, however, TV sets sold in Europe utilized dual components and could handle both
PAL and SECAM systems.
Digital Video
❖ In digital systems, the output of the CCD is digitized by the camera into a sequence of
single frames, and the video and audio data are compressed before being written to a tape
or digitally stored to disc or flash memory in one of several proprietary and competing
formats.
❖ Digital video data formats, especially the codec used for compressing and decompressing
video (and audio) data, are important;
In 1995, Apple’s FireWire technology was standardized as IEEE 1394, and Sony quickly
adopted it for much of its digital camera line under the name i.Link. FireWire and i.Link (and
USB 2) cable connections allow a completely digital process, from the camera’s CCD to the
hard disk of a computer; and camcorders store the video and sound data on an onboard digital
tape, writable mini-DVD, mini–hard disk, or flash memory.
HDTV
What started as the High Definition Television (HDTV) initiative of the Federal
Communications Commission in the 1980s changed first to the Advanced Television (ATV)
initiative and then finished as the Digital Television (DTV) initiative by the time the FCC
announced the change in 1996. This standard, which was slightly modified from both the Digital
Television Standard (ATSC Doc. A/53) and the Digital Audio Compression Standard (ATSC
Doc. A/52), moved U.S. television from an analog to a digital standard. It also provided TV
stations with sufficient bandwidth to present four or five Standard Television (STV, providing
the NTSC’s resolution of 525 lines with a 4:3 aspect ratio, but in a digital signal) signals or one
HDTV signal (providing 1,080 lines of resolution with a movie screen’s 16:9 aspect ratio).
HDTV provides high resolution in a 16:9 aspect ratio (see Figure 6-3). This aspect ratio allows
the viewing of Cinemascope and Panavision movies. There was contention between the
broadcast and computer industries about whether to use interlacing or progressive-scan
technologies. The broadcast industry promulgated an ultra-high-resolution, 1920 × 1080
interlaced format (1080i) to become the cornerstone of the new generation of high-end
entertainment centers, but the computer industry wanted a 1280 × 720 progressive-scan system
(720p) for HDTV. While the 1920 × 1080 format provides more pixels than the 1280 × 720
standard, the refresh rates are quite different. The higher-resolution interlaced format delivers
only half the picture every 1/60 of a second, and because of the interlacing, on highly detailed
images there is a great deal of screen flicker at 30 Hz. The computer people argue that the
picture quality at 1280 × 720 is superior and steady. Both formats have been included in the HDTV
standard by the Advanced Television Systems Committee (ATSC), found at www.atsc.org.
Displays
● Colored phosphors on a cathode ray tube (CRT) screen glow red, green, or blue when
they are energized by an electron beam.
● Because the intensity of the beam varies as it moves across the screen, some colors glow
brighter than others.
● Finely tuned magnets around the picture tube aim the electrons precisely onto the
phosphor screen, while the intensity of the beam is varied according to the video signal.
● This is why you needed to keep speakers (which have strong magnets in them) away
from a CRT screen. A strong external magnetic field can skew the electron beam to one
area of the screen and sometimes cause a permanent blotch that cannot be fixed by
degaussing—an electronic process that readjusts the magnets that guide the electrons.
● Flat screen displays are all-digital, using either liquid crystal display (LCD) or plasma
technologies, and have supplanted CRTs for computer use.
● Some professional video producers and studios, however, prefer CRTs to flat screen
displays, claiming colors are brighter and more accurately reproduced.
● Full integration of digital video in cameras and on computers eliminates the analog
television form of video, from both the multimedia production and the delivery platform.
● If your video camera generates a digital output signal, you can record your video direct-
to-disk, where it is ready for editing.
● If a video clip is stored as data on a hard disk, CD-ROM, DVD, or other mass-storage
device, that clip can be played back on a computer’s monitor without special hardware.
● The process of building a single frame from two fields is called interlacing, a technique
that helps to prevent flicker on CRT screens.
● Computer monitors use a different progressive-scan technology, and draw the lines of an
entire frame in a single pass, without interlacing them and without flicker.
● In television, the electron beam actually makes two passes on the screen as it draws a
single video frame, first laying down all the odd-numbered lines, then all the even-
numbered lines, as they are interlaced.
● On a computer monitor, lines are painted one-pixel thick and are not interlaced. Single-
pixel lines displayed on a computer monitor look fine; on a television, these thin lines
flicker brightly because they only appear in every other field.
● To prevent this flicker on CRTs, make sure your lines are greater than two pixels thick
and that you avoid typefaces that are very thin or have elaborate serifs.
● When capturing images from a video signal, you can filter them through a de-interlacing filter
provided by image-editing applications such as Photoshop and Fireworks. With
typefaces, interlacing flicker can often be avoided by anti-aliasing the type to slightly blur
the edges of the characters.
● The term “interlacing” has a different meaning on the Web, where it describes the
progressive display of lines of pixels as image data is downloaded, giving the impression
that the image is coming into focus as increasingly more data arrives.
● Most computers today provide video outputs to CRT, LCD, or plasma monitors at greater
than 1024 × 768 resolution.
● Table 6-1 describes the various aspect ratios and width/heights in pixels used by
computer displays since IBM’s VGA standard was adopted in 1987.
● The VGA’s once ubiquitous 640 × 480 screen resolution is again becoming common for
handheld and mobile device displays.
Acronym   Name                                              Aspect Ratio   Width (pixels)   Height (pixels)
VGA       Video Graphics Array                              4:3            640              480
SVGA      Super Video Graphics Array                        4:3            800              600
XGA       eXtended Graphics Array                           4:3            1024             768
XGA+      eXtended Graphics Array Plus                      4:3            1152             864
WXGA      Widescreen eXtended Graphics Array                5:3            1280             768
WXGA      Widescreen eXtended Graphics Array                8:5 (16:10)    1280             800
SXGA      Super eXtended Graphics Array                     4:3            1280             960
SXGA      Super eXtended Graphics Array                     5:4            1280             1024
HD        High Definition (Basic)                           16:9           1366             768
WSXGA     Widescreen Super eXtended Graphics Array          8:5 (16:10)    1440             900
HD+       High Definition (Plus)                            16:9           1600             900
UXGA      Ultra eXtended Graphics Array                     4:3            1600             1200
WSXGA+    Widescreen Super eXtended Graphics Array Plus     8:5 (16:10)    1680             1050
HD-1080   Full High Definition                              16:9           1920             1080
WUXGA     Widescreen Ultra eXtended Graphics Array          8:5 (16:10)    1920             1200
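As a small aside (an illustration, not from the text), the aspect-ratio column can be reproduced from the pixel dimensions; note that a few entries, such as 1366 × 768, are only approximately 16:9.

```python
# Reduce a width/height pair to its simplest ratio using the greatest common divisor.

from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(640, 480))    # 4:3   (VGA)
print(aspect_ratio(1280, 800))   # 8:5   (WXGA, often written 16:10)
print(aspect_ratio(1920, 1080))  # 16:9  (Full HD)
```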
● It is common practice in the television industry to broadcast an image larger than will fit
on a standard TV screen so that the “edge” of the image seen by a viewer is always
bounded by the TV’s physical frame, or bezel.
● This is called overscan. In contrast, computer monitors display a smaller image on the
monitor’s picture tube (underscan), leaving a black border inside the bezel.
● Consequently, when a digitized video image is displayed on a CRT, there is a border
around the image; and, when a computer screen is converted to video, the outer edges of
the image will not fit on a TV screen. Only about 360 of the 480 lines of the computer
screen will be visible. Video editing software often will show you the safe areas while you
are editing.
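A rough sketch of the safe-area idea follows. The 5 and 10 percent margins are common industry conventions for action-safe and title-safe regions, not figures from the text, which only notes that roughly 360 of 480 lines survive overscan.

```python
# Shrink a frame by a chosen margin on every side to get a centred "safe" rectangle.

def safe_area(width: int, height: int, margin: float):
    """Return (x, y, w, h) of a centred rectangle shrunk by `margin` per side."""
    w = round(width * (1 - 2 * margin))
    h = round(height * (1 - 2 * margin))
    return ((width - w) // 2, (height - h) // 2, w, h)

action_safe = safe_area(640, 480, 0.05)   # keep important action inside this
title_safe = safe_area(640, 480, 0.10)    # keep titles and text inside this
```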
● A digital video architecture is made up of an algorithm for compressing and encoding
video and audio, a container in which to put the compressed data, and a player that can
recognize and play back those files.
● Common containers for video are Ogg (.ogg, Theora for video, Vorbis for audio), Flash
Video (.flv), MPEG (.mp4), QuickTime (.mov), Windows Media Format (.wmv), WebM
(.webm), and RealMedia (.rm).
● Containers may include data compressed by a choice of codecs, and media players may
recognize and play back more than one video file container format.
● Container formats may also include metadata—important information about the tracks
contained in them—and even additional media besides audio and video.
● The QuickTime container, for example, allows inclusion of text tracks, chapter markers,
transitions, and even interactive sprites.
● Totally Hip’s LiveStage Pro is an authoring tool that can produce interactive multimedia
self-contained within a single QuickTime .mov container.
Codecs
● To digitize and store a 10-second clip of full-motion video in your computer requires the
transfer of an enormous amount of data in a very short amount of time.
● Reproducing just one frame of digital component video at 24 bits requires almost
1MB of computer data; 30 seconds of full-screen, uncompressed video will fill a gigabyte
hard disk.
● Full-size, full-motion uncompressed video requires that the computer deliver data at
about 30MB per second.
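Those figures are easy to check with simple arithmetic. The sketch below assumes a 640 × 480 frame at 24 bits per pixel and 30 frames per second; true broadcast component video is larger still, which is why the text says “almost 1 MB” per frame.

```python
# Back-of-the-envelope data rates for uncompressed video.

WIDTH, HEIGHT = 640, 480
BYTES_PER_PIXEL = 3            # 24-bit colour
FPS = 30

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL          # ~0.9 MB per frame
per_second = frame_bytes * FPS                          # ~27.6 MB per second
thirty_seconds_gb = per_second * 30 / (1024 ** 3)       # ~0.8 GB for 30 seconds

print(f"One frame: {frame_bytes / 2**20:.2f} MB")
print(f"One second: {per_second / 2**20:.1f} MB")
print(f"Thirty seconds: {thirty_seconds_gb:.2f} GB")
```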
● This overwhelming technological bottleneck is overcome using digital video compression
schemes or codecs (coders/decoders).
● A codec is the algorithm used to compress a video for delivery and then decode it in real
time for fast playback.
● Different codecs are optimized for different methods of delivery (for example, from a
hard drive, from a DVD, or over the Web). Codecs such as Theora and H.264 compress
digital video information at rates that range from 50:1 to 200:1.
● Some codecs store only the image data that changes from frame to frame instead of the
data that makes up each and every individual frame.
● Other codecs use computation-intensive methods to predict what pixels will change from
frame to frame and store the predictions to be deconstructed during playback.
● These are all lossy codecs where image quality is (somewhat) sacrificed to significantly
reduce file size.
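A toy sketch of the “store only what changes” idea, assuming NumPy and grayscale frames of identical size; real codecs add motion prediction, transforms, and entropy coding, but the principle is the same.

```python
# Keep the first frame whole, then for each later frame record only the pixels
# that differ from the previous one (a crude "delta frame").

import numpy as np

def delta_encode(frames):
    """Return the first frame plus, for each later frame, the changed pixels only."""
    encoded = [("key", frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        changed = np.argwhere(prev != cur)     # coordinates of pixels that differ
        values = cur[prev != cur]              # their new values
        encoded.append(("delta", changed, values))
    return encoded
```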
MPEG
● The MPEG standards were developed by the Moving Picture Experts Group (MPEG,
www.mpeg.org), a working group convened by the International Organization for
Standardization (ISO) and the International Electro-technical Commission (IEC), which
created standards for the digital representation of moving pictures as well as associated
audio and other data.
● Using MPEG-1 (specifications released in 1992), you could deliver 1.2 Mbps (megabits
per second) of video and 250 Kbps (kilobits per second) of two-channel stereo audio
using CD-ROM technology.
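Quick arithmetic on those MPEG-1 figures shows why a CD-ROM could hold roughly an hour of material:

```python
# Roughly how much MPEG-1 material fits on a 650 MB CD-ROM at the rates above.

video_mbps = 1.2
audio_mbps = 0.25
total_mbps = video_mbps + audio_mbps          # ~1.45 megabits per second

cd_megabits = 650 * 8                         # 650 MB expressed (roughly) in megabits
minutes = cd_megabits / total_mbps / 60
print(f"Approximate playing time: {minutes:.0f} minutes")   # about an hour
```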
● MPEG-2 (specifications released in 1994), a completely different system from MPEG-1,
required higher data rates (3 to 15 Mbps) but also delivered higher image resolution,
improved picture quality, interlaced video formats, multiresolution scalability, and
multichannel audio features.
● MPEG-2 became the video compression standard required for digital television (DTV)
and for making DVDs.
● The MPEG specifications since MPEG-2 include elements beyond just the encoding of
video.
● As a container, MPEG-4 (specifications released in 1998 and 1999) provides a content-
based method for assimilating multimedia elements.
● It offers indexing, hyperlinking, querying, browsing, uploading, downloading, and
deleting functions, as well as “hybrid natural and synthetic data coding,” which will
enable harmonious integration of natural and synthetic audiovisual objects.
● With MPEG-4, multiple views, layers, and multiple sound tracks of a scene, as well as
stereoscopic and 3-D views, are available, making virtual reality workable.
● MPEG-4 can adjust to varied download speeds, making it an attractive option for delivery
of video on the Web.
● The MPEG-4 AVC standard (Advanced Video Coding, Part 10) requires the H.264
codec for Blu-ray discs.
● The high bit rate requirements of video and the (relatively) low bit rates available from
CD-ROMs, and later from the Web, have led to a long and occasionally confusing
progression in the development of codecs.
● Generally, the greater the compression, the more processing “horsepower” (and waiting
time) is needed to compress and decompress the video.
● So only relatively new computers are capable of decompressing highly compressed
video at a rate that can keep up with the video data stream.
● Using the best or “latest” codecs in your project is a good idea, but it must be balanced by
ensuring that the video will play on the widest range of computers.
● The Flash video container, which uses the older H.263 and the newer VP6 codec
(depending upon version), is used by YouTube and at many web sites but requires the
Flash plug-in to be installed in the user’s browser.
● For playing WMV containers, Macintosh computers require installing the Silverlight
plug-in, a Microsoft development framework similar to Flash.
● The H.264 codec was developed by the Moving Picture Experts Group, is patented and
proprietary, and is required on Blu-ray discs and used by YouTube, iTunes, and some
broadcast services.
● Google’s open-source VP8 codec works within the WebM container
(www.webmproject.org), and was launched as an effort to replace Flash and H.264 on the
Web.
● Google is re-encoding all its Flash holdings at YouTube to work with WebM and VP8 as
well as with the H.264 codec.
Because of this codec and container war, for web developers wishing to place video elements
onto their pages, programming with the HTML5 <VIDEO> tag (which was supposed to simplify
and standardize inclusion of video at web sites) remains as complicated as ever. This is a
constantly changing area of development, so check these browsers from time to time to see
which codecs and containers are currently supported:
❖ DVD video uses MPEG-2 compression. Blu-ray video uses MPEG-4 AVC compression.
These are known standards and few choices are necessary: simply click “Save for DVD”
or “Save for Blu-ray.”
❖ But if you need to prepare a video file that will run on an iPod, a Droid, and an Atom-
based netbook, as well as in all web browsers, you will need to convert your material into
multiple formats.
❖ There are many free, shareware, and inexpensive file format converters available for
multiple platforms.
❖ There are many sources for film and video clips: a friend’s home movies may suffice, or
you can go to a “stock” footage house or a television station or movie studio.
❖ But acquiring footage that you do not own outright can be a nightmare—it is expensive,
and licensing rights and permissions may be difficult, if not impossible, to obtain. Each
second of video could cost $50 to $100 or more to license.
❖ It is important to understand at least the basics of video recording and editing, as well as
the constraints of using video in a multimedia project.
❖ Setting up a production environment for making digital video requires hardware that
meets minimum specifications for processing speed, data transfer, and storage.
❖ There are many considerations to keep in mind when setting up your production
environment, depending on the capabilities of your camcorder:
✔ Fast processor(s)
✔ Plenty of RAM
✔ Computer with FireWire (IEEE 1394 or i.Link) or USB connection and cables
✔ Fast and big hard disk(s)
✔ A second display to allow for more real estate for your editing software
✔ External speakers
✔ Nonlinear editing (NLE) software
✔ If you shoot handheld, try to use a camera with an electronic image stabilization feature for
static shots, use a “steady-cam” balancing attachment, or use camera moves and a
moving subject to mask your lack of steadiness.
✔ Even using a rolling office chair and sitting facing the back with the camera balanced on
the chair-back makes a convenient, stable dolly.
✔ If you must shoot handheld, set the camera’s lens to the widest angle: at a wide angle,
camera motion becomes smaller relative to the field of view and is thus less apparent.
✔ Many digital camcorders will allow you to choose 4:3 or 16:9 aspect ratios for your
recording, one or the other, but not both.
✔ There is no easy way to convert between these aspect ratios, so you should decide up
front which to use in your multimedia project.
✔ There are two ways to convert from 16:9 to 4:3. The Letterbox or hard matte method
produces blank bars at top and bottom, but leaves the original image untouched;
✔ Pan and Scan, on the other hand, loses both sides of the original image. When using the
Pan and Scan method for conversion, editors will carefully pan across wide scenes to
capture the best area to show.
✔ Videographers and widescreen moviemakers often consider a 4:3 “safe frame” area
when setting up their wide shots, knowing that their work will be converted to 4:3 for the
DVD aftermarket.
✔ Some DVDs use an anamorphic widescreen coding system to squeeze 16:9 widescreen
image data into a DVD’s standard 4:3 aspect ratio format; with a compatible player, these
“Enhanced for Widescreen Televisions” discs will play the original video properly on a
16:9 screen.
(Figures: Converting 16:9 to 4:3; Converting 4:3 to 16:9)
Storyboarding
✔ Preplanning a video project is a step that cannot be skipped without costing you lost time,
lots of unnecessary aggravation, and money that would be better spent elsewhere.
✔ Successful video production, of any sort, deserves the time it takes to make a plan to
carry it out.
✔ It may take a little time at first, but you’ll find it to be very helpful in the long run.
Storyboards are like any sequential comic you read daily.
✔ Every day there are three or four panels showing a progression of story or information.
✔ Take the time to structure your production by writing it down and then engineer a
sequential group of drawings showing camera and scene, shooting angles, lighting,
action, special effects, and how objects move through from start to finish. A storyboard
can get everyone on one page quickly.
Lighting
Chroma Keys
✔ Chroma keys allow you to choose a color or range of colors that become transparent,
allowing the video image to be seen “through” the computer image.
✔ This is the technology used by a newscast’s weather person, who is shot against a blue
background that is made invisible when merged with the electronically generated image
of the weather map.
✔ The weatherman controls the computer part of the display with a small handheld
controller.
✔ A useful tool easily implemented in most digital video editing applications is blue screen,
green screen, Ultimatte, or chroma key editing.
✔ When Captain Picard of Star Trek fame walks on the surface of the moon, it is likely that
he is actually walking on a studio set in front of a screen or wall painted blue. Actually
placing Picard on the moon was, no doubt, beyond the budget of the shoot, but it could be
faked using blue screen techniques.
✔ After shooting the video of Picard’s walk against a blue background and shooting another
video consisting of the desired background moonscape, the two videos were mixed
together: wherever there was blue in the Picard shot, it was replaced by the background
image, frame by frame.
✔ Blue screen is a popular technique for making multimedia titles because expensive sets
are not required. Incredible backgrounds can be generated using 3-D modeling and
graphic software, and one or more actors, vehicles, or other objects can be neatly layered
onto that background.
✔ Video editing applications provide the tools for this.
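A hedged sketch of the core substitution, assuming NumPy and same-sized RGB frames; real keyers produce soft mattes and handle spill, but the basic replacement looks like this:

```python
# Wherever a foreground pixel is clearly "blue screen", take the corresponding
# pixel from the background plate instead.

import numpy as np

def chroma_key(foreground: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Replace strongly blue foreground pixels with the background image."""
    fg = foreground.astype(int)               # avoid uint8 overflow in the comparisons
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    key_mask = (b > 120) & (b > r + 40) & (b > g + 40)   # blue dominates red and green
    composite = foreground.copy()
    composite[key_mask] = background[key_mask]
    return composite
```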
✔ When you are shooting blue screen, be sure that the lighting of the screen is absolutely
even; fluctuations in intensity will make this “key” appear choppy or broken.
✔ Shooting in daylight, and letting the sun illuminate the screen, will mitigate this problem.
✔ Also be careful about “color spill.” If your actors stand too close to the screen, the
colored light reflecting off the screen will spill onto them, and parts of their body will key
out.
✔ While adjustments in most applications can compensate for this, the adjustments are
limited.
Composition
✔ The general rules for shooting quality video for broadcast use also apply to multimedia.
✔ When shooting video for playback in a small window, it is best to avoid wide panoramic
shots, as the sweeping majesty will be lost.
✔ Use close-ups and medium shots, head-and-shoulders or even tighter.
✔ Consider also, depending upon the compression algorithm used, the amount of motion in
the shot: the more a scene changes from frame to frame, the more “delta” information
needs to be transferred from the computer’s memory to the screen.
✔ Keep the camera still instead of panning and zooming; let the subject add the motion to
your shot, walking, turning, talking.
✔ Beware of excessive backlighting; shooting with a window or a bright sky in the
background is a common error in amateur video production.
✔ Many cameras can be set to automatically compensate for backlighting. If you adjust for
this, the background may be “blown out” (so bright the video signal peaks), but at least
the foreground image you’re focusing on will be visible. Of course, the best choice in this
situation is to light the foreground.
✔ Non-professional cameras are set to always adjust the iris (the opening in the lens) to
keep the image’s overall exposure at a constant level.
✔ When you go from a dark to light setting the camera will adjust, and you can often see
this shift. Pro cameras allow the iris setting to be locked down to avoid this.
✔ In different situations, white may not be white, depending on the color temperature
(warmth or coolness) of the light source.
✔ White balance corrects for bluish, orange, or greenish color casts resulting from an
uneven distribution of colors in the spectrum your eye tells you is white, but your less
forgiving digital camera says is not quite white.
✔ Many cameras automatically set white balance with best guesses, but they also offer
adjustable white-balance settings.
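One simple automatic strategy, sketched below under the “gray world” assumption (an illustration, not how any particular camera works), scales each channel so the scene average comes out neutral:

```python
# Gray-world white balance: scale R, G and B so their averages match, removing
# an overall bluish, orange, or greenish cast.

import numpy as np

def gray_world_balance(image: np.ndarray) -> np.ndarray:
    """Return a white-balanced copy of an RGB image (uint8, 0-255)."""
    img = image.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means     # per-channel correction factor
    balanced = img * gains
    return np.clip(balanced, 0, 255).astype(np.uint8)
```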
✔ Titles and text are often used to introduce a video and its content.
✔ They may also finish off a project and provide credits accompanied by a sound track.
✔ Titles can be plain and simple, or they can be storyboarded and highly designed.
✔ For plain and simple, you can use templates in an image editor and then sequence those
images into your video using your video editing software.
✔ You can create your own imagery or animations and sequence them.
✔ More elaborate titles, typical for feature films and commercial videos, can become
multimedia projects in themselves.
✔ Upasana Nattoji Roy’s title design for Director Indrajit Nattoji’s “Aagey Se Right,” for
example, began with creative ideas that transitioned into a detailed storyboard and
animations, and was finally rendered using After Effects.
If you make your own, here are some suggestions for creating good titles:
o Fonts for titles should be plain, sans serif, and bold enough to be easily read.
o When you are laying text onto a dark background, use white or a light color for
the text.
o Use a drop shadow to help separate the text from the background image.
o Do not kern your letters too tightly.
o If you use underlining or drawn graphics, always make your lines at least two
pixels wide. If you use a one-pixel-wide line (or a width measured in an odd
number of pixels), the line may flicker when transferred to video due to
interlacing.
o Use parallel lines, boxes, and tight concentric circles sparingly. When you use
them, draw them large and with thick lines.
o Avoid colors like bright reds and magenta that are too “hot”; they might twinkle
and buzz.
o Neighboring colors should be markedly different in intensity. For example, use a
light blue and a dark red, but not a medium blue and a medium red.
o Keep your graphics and titles within the safe area of the screen. Remember that
CRT televisions overscan (see the earlier section “Overscan and the Safe Title
Area”).
o Bring titles on slowly, keep them on screen for a sufficient time, and then fade
them out.
o Avoid making busy title screens; use more pages or a longer sequence instead.
Nonlinear Editing (NLE)
✔ The video codecs used are lossy, so each time you finalize a file, it will be less true than
the original material—this is called generation loss.
✔ Because NLE software works with EDLs (edit decision lists) based upon the raw source
video, be sure you have sufficient disk space to store your original footage.