Camera 2014
Sometimes we can make an informed decision about which acquisition format to use; other times we take what we get and fix it in post. Both cases require an understanding of the tradeoffs manufacturers have made in producing a camera designed to sell at a particular price point. Without getting into why manufacturers make multiple models, or why a particular technology gets turned into a product at a given point in its life cycle, let's try to be agnostic.
All image-acquisition technologies work within particular parameters. Some of these parameters can be shifted by throwing money at them; others have limits set by the current state of the art. Knowing the difference can save beaucoup bucks and avoid surprises when something does not work as expected.
If I am renting equipment for a particular project my criteria will be different than if I am purchasing for multiple purposes, but in both cases a checklist will make sure that nothing is forgotten in the euphoria over the manufacturer's latest and greatest. I like to start with the price (my budget) and then list the "need to haves" and the "nice to haves". Don't forget: ease of use & market share, lenses & audio, codecs & frame rates, resolution & color depth, f-stops & contrast, size & weight. But how do we measure fidelity? Fidelity: "given the best reproduction system possible, how closely does the image compare to the original scene?" This is where throwing money can make a difference. The fidelity of the best DSLR will never equal that of the best digital cinema camera "across the board"; however, there may be some cases where the images are indistinguishable. If your project is one of those cases, rent the DSLR instead of paying for features you don't need.
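One way to keep that list from evaporating in the heat of the moment is to write it down in a reusable form. Here is a minimal sketch in Python; the budget figure and the example entries are purely hypothetical placeholders, not recommendations:

```python
# Hypothetical camera-evaluation checklist: budget first, then "need to have"
# versus "nice to have".  All entries are illustrative examples only.
checklist = {
    "budget": 15000,
    "need_to_have": ["interchangeable lenses", "XLR audio inputs",
                     "10-bit codec", "25/50 fps"],
    "nice_to_have": ["4K resolution", "built-in ND filters", "global shutter"],
}

def evaluate(camera, checklist):
    """Reject a camera that misses any 'need to have'; count the extras."""
    missing = [f for f in checklist["need_to_have"] if f not in camera["features"]]
    extras = [f for f in checklist["nice_to_have"] if f in camera["features"]]
    return {"in_budget": camera["price"] <= checklist["budget"],
            "missing": missing, "extras": extras}
```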
After the glass, the electronics in the camera determine the fidelity of the recorded image. Starting at the sensor chip: size does make a difference, but it is not the whole story. The transducers on the chip change photons into electrons. The larger each pixel, the more electrons it collects for the same ambient light; putting a microlens in front of each pixel will do the same, as will increasing the inherent sensitivity of the transducer. This is one argument for 35mm sensors; the other is depth of field. So if you really need a shallow depth of field, get a big sensor; otherwise look carefully at the sensitivity spec. Remember that, for a given sensor size, sensitivity goes down as the resolution goes up. Spreading the same amount of light across a 4K picture and then taking an SD cutout is not the same as shooting SD with the same sensor (which combines the output from approximately 16 pixels).
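A rough numerical sketch of that difference, assuming a simple 4x4 sum (the "approximately 16 pixels" above) and purely illustrative photo-electron counts:

```python
import numpy as np

# Hypothetical sensor readout under constant illumination: each photosite
# collects ~100 photo-electrons plus random shot noise (numbers are made up).
rng = np.random.default_rng(0)
sensor_4k = rng.poisson(lam=100, size=(2160, 3840)).astype(float)

# "SD cutout": crop a region of the 4K image -> each output pixel still
# carries only the ~100 electrons of a single small photosite.
sd_cutout = sensor_4k[:576, :720]

# Binning: sum 4x4 blocks, so each output pixel combines 16 photosites,
# roughly what a native SD-sized photosite would have collected.
binned = sensor_4k.reshape(540, 4, 960, 4).sum(axis=(1, 3))

print("mean signal, cutout pixel :", sd_cutout.mean())   # ~100 e-
print("mean signal, binned pixel :", binned.mean())      # ~1600 e-, about 4x better SNR
```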
While electronic imagers no longer need a mechanical "shutter", we still use the term, as in rolling or global shutter. A rolling shutter charges the pixels sequentially, row by row; a global shutter charges them all at once. The following clip is an extreme example of the kind of distortion a rolling shutter can cause:
https://www.youtube.com/watch?v=6LzaPARy3uA
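A toy simulation shows where the skew comes from: if each sensor row is sampled a little later than the one above it, a vertical bar moving horizontally comes out slanted. This is a minimal sketch with made-up numbers, not a model of any particular camera:

```python
import numpy as np

ROWS, COLS = 480, 640
bar_width = 20
speed_px_per_row_time = 0.5   # how far the bar moves while one row is read out

def scene(t):
    """Vertical bar whose left edge sits at x = 100 + t (t in row-readout units)."""
    frame = np.zeros((ROWS, COLS), dtype=np.uint8)
    x = int(100 + t)
    frame[:, x:x + bar_width] = 255
    return frame

# Global shutter: every row sampled at the same instant, t = 0.
global_frame = scene(0)

# Rolling shutter: row r is sampled at time r * speed, so lower rows see the
# bar further to the right -> the bar appears tilted in the captured frame.
rolling_frame = np.zeros((ROWS, COLS), dtype=np.uint8)
for r in range(ROWS):
    rolling_frame[r] = scene(r * speed_px_per_row_time)[r]

print("bar left edge, top row   :", np.argmax(rolling_frame[0] > 0))    # 100
print("bar left edge, bottom row:", np.argmax(rolling_frame[-1] > 0))   # ~339
```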
In the past, adding the extra circuitry to enable a global shutter not only increased cost but also reduced sensitivity, because that circuitry sat in the same plane as the sensing element and so reduced the area available for light capture. This is changing, and a global shutter is definitely the better solution. In either case, the speed with which the charges can be transferred to memory is going to limit exposure times and frame rates. The roughly 6 gigapixels per second of throughput required for 2000+ fps recording calls for special sensors.
http://vimeo.com/63490371
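The arithmetic behind that throughput figure is easy to sanity-check. A back-of-the-envelope sketch, where the frame size and bit depth are assumptions (the text only quotes the aggregate number):

```python
# Rough pixel-throughput check: an HD-sized sensor read out at 2000+ fps.
width, height = 1920, 1080          # assumed frame size
fps = 2000
pixels_per_second = width * height * fps
print(f"{pixels_per_second / 1e9:.1f} gigapixels/second")   # ~4.1 Gpixel/s at exactly 2000 fps

# At an assumed 12 bits per photosite that is already several gigabytes per
# second, which is why such sensors need dedicated readout electronics.
bytes_per_second = pixels_per_second * 12 / 8
print(f"{bytes_per_second / 1e9:.1f} GB/s")
```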
The other limitation is the number of photons required to get above the inherent noise level of the electronics. The voltage coming off the sensor is an analog signal; we need to change it to digital.
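Conceptually the analog-to-digital step looks like this: the sensor voltage, noise floor included, is mapped onto a fixed number of code values set by the converter's bit depth. A simplified sketch with made-up voltages and noise figures:

```python
import numpy as np

rng = np.random.default_rng(1)

full_well_volts = 1.0       # assumed voltage at sensor saturation
read_noise_volts = 0.0005   # assumed electronic noise floor
bit_depth = 12              # 12-bit ADC -> 4096 code values

def digitize(signal_volts):
    """Add the noise floor, then quantize the voltage to an integer code value."""
    noisy = signal_volts + rng.normal(0.0, read_noise_volts, size=np.shape(signal_volts))
    codes = np.round(np.clip(noisy, 0.0, full_well_volts) / full_well_volts * (2**bit_depth - 1))
    return codes.astype(int)

# A very dim signal barely above the noise floor versus a mid-grey signal:
print(digitize(np.full(5, 0.001)))   # ~4, scattered a couple of codes by the noise
print(digitize(np.full(5, 0.5)))     # ~2048, give or take a couple of code values
```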
Whoops, we almost forgot color! Is each pixel actually three sensing elements, or four, or only one? Do we have a single chip or three chips? Again, what are the tradeoffs, aside from cost? You can make a smaller camera with a single chip. At one point the solution was a mechanical RGB color wheel; today, if you have a single chip, there is a color filter array in front of it, which is going to absorb some photons, with all the consequences. The layout of the filter (one possibility is the Bayer pattern) affects the amount of information available in each color as well as how accurately geometric shapes are rendered. I call this "snake oil": how many sensing elements do I need to define a point in a raster? Most manufacturers will say "one" and define their color resolution on this basis, but as there are no full-spectrum sensing elements, each element senses only one color. If I have a 1920 x 1080 filter array of equal RGB elements, is this 691,200 RGB pixels or 2 megapixels, as many manufacturers contend? In order to get to 2 MP, two thirds of the color information has to come from adjacent locations and may not be accurate. Does this mean we can use a 4K sensor to get accurate color information in an HD picture? Theoretically, yes; it's only software, after all.
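As a sketch of that idea: in an RGGB Bayer mosaic every 2x2 block holds one red, two green and one blue sample, so a UHD mosaic can be collapsed into an HD frame in which every output pixel is built only from locally measured values. This is a deliberate simplification; real debayering is considerably more sophisticated:

```python
import numpy as np

def bayer_to_hd_rgb(mosaic):
    """Collapse a 3840x2160 RGGB Bayer mosaic into a 1920x1080 RGB image,
    using only the samples inside each 2x2 block (no interpolation from
    neighbouring blocks)."""
    r  = mosaic[0::2, 0::2]          # one red sample per block
    g1 = mosaic[0::2, 1::2]          # two green samples per block
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]          # one blue sample per block
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

mosaic = np.random.default_rng(2).integers(0, 4096, size=(2160, 3840)).astype(float)
hd = bayer_to_hd_rgb(mosaic)
print(hd.shape)   # (1080, 1920, 3): full HD with a measured R, G and B per pixel
```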
High resolution or accurate color rendition? At screen sizes up to 50 inches, HD with
accurate colors is going to look better than 4K with moiré and false color artifacts. This
brings us to the “secret sauce”, but before that let’s get digital.
Knowing which camera can or should be used for a particular shooting situation is important when you can make that decision. Knowing how to get the best out of the "footage" you have is necessary when you can't. Understanding how the technology works helps in both cases.
In part one we talked about the physical construction of the sensor and tradeoffs made to
reach manufacturing goals. This concluding portion will investigate the electronics and
software.
Lens stabilization moves the image around to keep it on the sensor, thus maintaining full resolution. The downside is additional complexity, lens noise, perhaps additional lens elements and the resulting reduction in sensitivity, as well as the relatively slow reaction time of the mechanical system, which becomes visible at higher frame rates. Sensor stabilization moves the sensor around to follow the image, thus allowing the use of any lens. The other option is electronic stabilization: moving the image around so that the object to be stabilized stays in the same position in the image plane. If the actual image size is sufficiently larger than the required output image size there will be no loss in resolution; otherwise a blow-up will be required to avoid clipping the edges. Think of an HD sensor when you only need an SD picture. The processing power required to do this in camera is going to add heat, especially if it has to scale the picture as well, and will be limited in accuracy compared to post-production tools.
www.youtube.com/watch?v=9SRCPKGRpw0
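The cropping approach is easy to sketch: keep an output window smaller than the sensor frame and shift that window against the measured shake, blowing up only when the window would run off the edge. A minimal illustration; the motion measurement itself is the hard part and is not shown:

```python
import numpy as np

SENSOR_H, SENSOR_W = 1080, 1920   # HD sensor
OUT_H, OUT_W = 576, 720           # SD output window

def stabilized_crop(frame, shake_x, shake_y):
    """Shift the SD crop window against the measured shake.  As long as the
    shift stays inside the HD frame there is no scaling and no resolution loss."""
    cx = (SENSOR_W - OUT_W) // 2 - int(round(shake_x))
    cy = (SENSOR_H - OUT_H) // 2 - int(round(shake_y))
    # Clamp to the sensor edges; beyond this point a blow-up would be needed
    # to avoid visibly clipping the picture.
    cx = max(0, min(SENSOR_W - OUT_W, cx))
    cy = max(0, min(SENSOR_H - OUT_H, cy))
    return frame[cy:cy + OUT_H, cx:cx + OUT_W]

frame = np.zeros((SENSOR_H, SENSOR_W), dtype=np.uint8)
print(stabilized_crop(frame, shake_x=40, shake_y=-25).shape)   # (576, 720)
```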
Autofocus is a great aid, when it works! Reducing the depth of field (35mm sensor) makes autofocus systems slower to react, because the area that is actually in focus is smaller. Overshoot and pumping occur when the scene changes faster than the AF can react or when there is not enough information; this shows up in low light if there is no additional infrared emitter on the camera. The most important thing about AF is the off button.
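Overshoot and pumping are easier to picture with a toy model of contrast-detection autofocus: step the lens, measure a sharpness score, keep climbing while it improves, reverse and shrink the step when it drops. With a noisy score (low light) the search keeps hunting. A deliberately crude sketch, not any manufacturer's actual algorithm:

```python
import random

def sharpness(focus_pos, noise=0.0):
    """Made-up sharpness curve peaking at focus position 50, plus measurement noise."""
    return -(focus_pos - 50) ** 2 + random.gauss(0.0, noise)

def contrast_af(start=0, step=8, noise=0.0, max_iters=40):
    pos, score = start, sharpness(start, noise)
    for _ in range(max_iters):
        trial = pos + step
        trial_score = sharpness(trial, noise)
        if trial_score > score:
            pos, score = trial, trial_score    # keep climbing toward the peak
        else:
            step = -step // 2 or -1            # overshot: reverse and shrink the step
        if abs(step) == 1 and trial_score <= score:
            break
    return pos

random.seed(3)
print(contrast_af(noise=0.0))    # settles at 50
print(contrast_af(noise=500.0))  # noisy score: the search hunts ("pumps") and may settle off the peak
```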
Up until now we have talked about matching the recording requirements of your project(s) with the available budget, but the process does not stop at the camera. Integrating the data generated by the camera into the workflow can be seamless or seem senseless. Consider the data coming off the newest digital cinema cameras at around a terabyte per hour. Has lossless data reduction been applied before recording? Are the systems in place to decode the image (debayering, interframe reconstruction, decoding, etc.) without delaying the workflow? Do you have an online working copy, an offline backup and an on-the-shelf safety copy? A typical workflow will look something like this:
WORKFLOW DIAGRAM
You see that little box "reconstruct image"? This is the secret sauce: how fast and how well can we get the image quality we need for our production out of the data file? The reason cameras record intraframe or even uncompressed, at significantly higher data rates than interframe, even though interframe mostly removes redundant data, is that our secret sauce is not yet good enough!
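To get a feel for what that difference costs in storage, here is a rough comparison; the bit depth and the compression ratios are ballpark assumptions for illustration, not measurements of any particular codec:

```python
# Rough storage arithmetic for one hour of UHD 25p material
# (illustrative bit depth and compression ratios, not vendor figures).
width, height, fps, bits_per_pixel = 3840, 2160, 25, 30   # ~10-bit 4:2:2

uncompressed_gb_per_hour = width * height * fps * bits_per_pixel / 8 * 3600 / 1e9
intraframe_gb_per_hour   = uncompressed_gb_per_hour / 5    # assumed ~5:1 intraframe codec
interframe_gb_per_hour   = uncompressed_gb_per_hour / 50   # assumed ~50:1 long-GOP codec

for name, gb in [("uncompressed", uncompressed_gb_per_hour),
                 ("intraframe",   intraframe_gb_per_hour),
                 ("interframe",   interframe_gb_per_hour)]:
    print(f"{name:>12}: {gb:8.0f} GB per hour")
```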