Did you ever wonder how 3D works? How do images come out of the screen or move behind it? What do the glasses do? Why do some have colors and others look like sunglasses?
3D is fascinating because it involves trickery of the brain. Most people have what is known as binocular vision, which lets them perceive depth and see the world in 3D. The separation between our two eyes causes each eye to see the world from a slightly different perspective. The brain merges these two views together, and from the difference between the two images it can calculate the distance of each object. So 3D, or “stereoscopy,” refers to how your eyes and brain create the impression of a third dimension.
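The geometry behind this is the same triangulation used in computer stereo vision: depth is proportional to the eye separation (the “baseline”) divided by the difference between the two images (the “disparity”). Here is a minimal sketch; the baseline, focal length and disparity values are hypothetical numbers chosen for illustration.

```python
# Depth from binocular disparity: a toy triangulation sketch.
# All numeric values below are assumed/illustrative, not measured.

def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Classic stereo triangulation: Z = f * B / d.
    The larger the disparity between the two views, the closer the object."""
    return focal_px * baseline_m / disparity_px

# Same baseline and optics; only the disparity differs.
near = depth_from_disparity(baseline_m=0.065, focal_px=800, disparity_px=52)
far = depth_from_disparity(baseline_m=0.065, focal_px=800, disparity_px=13)
print(round(near, 2))  # 1.0 (metres) -- large disparity, close object
print(round(far, 2))   # 4.0 (metres) -- small disparity, distant object
```

Note how halving the disparity doubles the computed distance: this inverse relationship is why nearby objects “jump” so much more than the background.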
A simple way to understand this principle is to hold your thumb up at arm’s length and close each eye alternately while looking at your thumb. As you switch between eyes you should see your thumb “jumping” back and forth against the background.
You’ll notice that the angle from which you’re viewing the thumb changes, and that you can see different parts of the thumb depending on which eye is open. In a sense, you are seeing two different images of the thumb.
When you view the thumb with both eyes you are still seeing two images, but your brain merges them into one. This is what allows your brain to understand that the object has depth.
Now that we understand why we perceive in 3D, it follows that any display technology hoping to trick your eyes into believing you are viewing a 3D image must provide a slightly different image to each eye.
In general, for 3D movies and TV broadcasts, the left and right images captured by a “stereoscopic” camera are projected or displayed simultaneously; glasses or filters are then used to feed each of your eyes a different perspective of the same scene, creating a sense of depth. Stereoscopic cameras are either two cameras mounted on a rig, two lenses on one camera, or two cameras that operate through the same lens but split the image inside the camera using tiny mirrors.
There are several different types of 3D viewing systems with associated glasses:
Color filter glasses (Anaglyph)
Color filter glasses are one of the oldest methods of viewing 3D images or movies (first developed in 1853). The idea is to split an image in two, sending only the red channel of the image to the left eye and the blue channel to the right eye. Both sub-images, which show the scene from slightly different perspectives, are combined and displayed on the monitor or screen at the same time.
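The channel mixing described above can be sketched in a few lines. This is a toy version that assumes the two views arrive as equally sized grids of (r, g, b) tuples; a real anaglyph encoder works on actual image files, but the per-pixel rule is the same.

```python
# A minimal sketch of red/blue anaglyph mixing: take the red channel
# from the left-eye view and the blue channel from the right-eye view.
# Input format (an assumption): nested lists of (r, g, b) tuples.

def make_anaglyph(left, right):
    """Combine two views into one anaglyph frame, pixel by pixel."""
    rows = []
    for left_row, right_row in zip(left, right):
        row = []
        for (lr, lg, lb), (rr, rg, rb) in zip(left_row, right_row):
            # The red filter passes only lr to the left eye;
            # the blue filter passes only rb to the right eye.
            row.append((lr, 0, rb))
        rows.append(row)
    return rows

# Two tiny 1x2 "images" standing in for the left and right camera views.
left_img = [[(200, 120, 40), (10, 20, 30)]]
right_img = [[(90, 80, 250), (1, 2, 3)]]
print(make_anaglyph(left_img, right_img))
# [[(200, 0, 250), (10, 0, 3)]]
```

Dropping the green channel entirely is also why this scheme cannot reproduce a full range of color, as noted below.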
Again, the system works by feeding different images into your eyes. The color filters allow only one of the images to enter each eye, and your brain does the rest. There are two color filter systems: Red/Blue and Red/Green. This technique, however, didn’t allow for a full range of color and had a tendency to “ghost,” or have the once-distinct images bleed into one another (not to mention it was more apt to cause headaches and nausea).
Polarizing glasses (Passive, not electronic, glasses)
This method is more commonly used in today’s 3D movie projections, such as the RealD and IMAX 3D systems. The audience wears special glasses with two polarizing lenses whose polarization directions differ by 90 degrees. This makes it possible for the left eye to see its picture clearly, while everything sent to the right eye (sent out at a different polarization) appears black; the same applies in reverse for the right eye. Stereo 3D film theaters use special silver-coated screens that are much better at reflecting light back to the viewing audience.
LCD shutter glasses (Active, electronic, glasses)
In an LCD shutter glass 3D display, the glasses are synced to your television and actively open and close shutters in front of your eyes, allowing only one eye to see the screen at a time. The glasses are kept in sync with the television set using Bluetooth, infrared or radio technology. Each lens is an LCD that can be made opaque, thus acting as a shutter. The shutters switch so quickly that the flicker is hardly noticeable. These shutter lenses are made possible by the high refresh rates of 3D-enabled televisions (typically 120 or 240 Hz), meaning the image on screen is quickly loaded and reloaded. Through the glasses, each eye receives what appears to be one constant image instead of a flicker. The downside is that the glasses are expensive and require batteries.
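The time-slicing can be made concrete with a small sketch. Assuming a 120 Hz panel that alternates left and right frames, each eye effectively sees 60 frames per second; the numbers below follow from that assumption.

```python
# A toy sketch of active-shutter frame timing on an assumed 120 Hz display.
# Even-numbered frames go to the left eye, odd frames to the right,
# while the opposite lens is driven opaque.

REFRESH_HZ = 120

def frame_schedule(n_frames):
    """Return (time_ms, eye) pairs for the first n_frames refreshes."""
    period_ms = 1000 / REFRESH_HZ  # ~8.33 ms per refresh
    return [(round(i * period_ms, 2), "left" if i % 2 == 0 else "right")
            for i in range(n_frames)]

for t, eye in frame_schedule(4):
    print(f"{t:6.2f} ms -> {eye} shutter open")
#   0.00 ms -> left shutter open
#   8.33 ms -> right shutter open
#  16.67 ms -> left shutter open
#  25.00 ms -> right shutter open
```

Because each eye’s shutter is closed half the time, each eye sees only 60 of the 120 refreshes per second, which is why a high panel refresh rate matters for avoiding visible flicker.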
Where stereo 3D becomes interesting is in learning how to manipulate these images on screen for creative effect. How do you make objects appear as though they are coming out of the screen towards you, or make an actor appear to be in front of an object?
Simply put, your mind has a number of depth cues. These are signs that tell the brain there is a measurable distance between objects, and they have been manipulated by filmmakers for years. Focus is the easiest to understand: if something is in focus and the objects around it are out of focus, your brain understands that there is a distance between them.
By altering the distance between the two images on screen we can control how far forward or backward objects appear to be. If we move the left and right images closer together, your eyes converge as though the object were closer; if we move them further apart, your eyes adjust as though it had moved further away. Too much manipulation either way and the viewing experience becomes very uncomfortable.
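This on-screen separation is usually called parallax. A tiny sketch of the sign convention, which is itself an assumption (conventions vary between productions): positive parallax means the right-eye copy of an object sits to the right of the left-eye copy.

```python
# Sketch: the horizontal offset between the left- and right-eye copies
# of an object decides where it appears relative to the screen plane.
# Sign convention (assumed): parallax = right_x - left_x.

def apparent_position(left_x, right_x):
    """Classify an object's apparent depth from its screen parallax."""
    parallax = right_x - left_x
    if parallax > 0:
        return "behind the screen"        # eyes relax outwards
    if parallax < 0:
        return "in front of the screen"   # eyes cross (converge)
    return "at the screen plane"          # zero parallax

print(apparent_position(100, 108))  # behind the screen
print(apparent_position(100, 92))   # in front of the screen
print(apparent_position(100, 100))  # at the screen plane
```

Pushing the parallax too far in either direction is exactly the “too much manipulation” problem: large negative parallax forces the eyes to cross hard, and positive parallax wider than the viewer’s eye separation would force them to diverge.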
The eight depth cues
Humans have eight depth cues that the brain uses to estimate the relative distance of the objects in every scene we look at. These are listed below. The first five have been used by artists, illustrators and designers for hundreds of years to simulate a 3D scene in paintings and drawings. The sixth cue is used in film and video to portray depth in moving objects. However, it is the last two that are the most powerful depth cues our brains use to estimate depth.
Combining depth cues
When many of these depth cues combine they can offer a very strong sense of depth. In this picture you will find perspective, lighting and shading, relative size, and occlusion, which all combine to produce a convincing sense of depth.
1. Focus
When we look at a scene in front of us, we scan over the various objects in the scene and continually refocus on each one. Our brains remember how we focus and build up a memory of the relative distance of each object compared to all the others in the scene.
2. Perspective
Our brains are constantly searching for the vanishing point in every scene we see. This is the point, often on the horizon, where objects become so small they disappear altogether. Straight lines and the relative size of objects help to build a map in our minds of the relative distance of the objects in the scene.
3. Occlusion
Objects at the front of a scene hide objects further back. This is occlusion. We make assumptions about the shape of the objects we see: when a shape appears broken by another object, we assume the broken object is further away, behind the object causing the breakage.
4. Lighting and shading
Light changes the brightness of objects depending on their angle relative to the light source. Objects appear brighter on the side facing the light source and darker on the side facing away from it. Objects also cast shadows which darken other objects. Our brains can build a map of the shape and relative position of objects in a scene from the way light falls on them and the pattern of the shadows they cast.
5. Color intensity and contrast
Even on the clearest day, objects appear to lose their color intensity the further away they are in a scene. Contrast (the difference between light and dark) is also reduced in distant objects. We can build a map in our minds of the relative distance of objects from their color intensity and level of contrast.
6. Relative movement
As we walk through a scene, close objects appear to be moving faster than distant objects. The relative movement of each object compared to others provides a very powerful cue to their relative distance. Cartoonists have used this to give an impression of 3D space in animations. Film and television producers often use relative movement to enhance a sense of depth in movies and television programs.
7. Vergence
Vergence is a general term covering both divergence and convergence. If we look at an object in the far distance, both eyes point forwards, parallel to each other. If we focus on an object close up, our eyes converge. The closer the object, the greater the convergence. Our brains can calculate how far away an object is from the amount of convergence our eyes need to apply to focus on it. Film and video producers can use divergence as a trick to give the illusion that objects are further away, but this should be used sparingly because divergence is not a natural eye movement and may cause eye strain.
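The vergence angle falls out of simple triangle geometry. Here is a sketch, assuming an object straight ahead and an average interocular distance of about 65 mm (an assumed figure, not from this article):

```python
import math

# Toy vergence geometry: the two eyes and the object form an isosceles
# triangle, so the angle between the lines of sight is twice the arctangent
# of half the eye separation over the viewing distance.

EYE_SEPARATION_M = 0.065  # assumed average interocular distance

def vergence_angle_deg(distance_m):
    """Angle (degrees) between the two lines of sight for an object
    straight ahead at the given distance."""
    half_angle = math.atan((EYE_SEPARATION_M / 2) / distance_m)
    return math.degrees(2 * half_angle)

print(round(vergence_angle_deg(0.3), 2))   # near object: angle of a few degrees
print(round(vergence_angle_deg(10.0), 2))  # far object: eyes nearly parallel
```

At arm’s length the angle is over ten degrees, while at ten metres it drops below half a degree, which is why vergence is a strong cue up close but nearly useless at a distance.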
8. Stereopsis
Stereopsis results from binocular vision. It refers to the small differences, in everything we look at, between what the left and right eyes see. From these differences our brains calculate which objects are close and which are further away. The “jumping thumb” example we used earlier is a demonstration of stereopsis.
Glassless Television Displays (Autostereoscopic)
An autostereoscopic display is essentially any display that does not require glasses to view the image in 3D. The Nintendo 3DS, Nintendo’s newest portable 3D gaming device, is one such device. One of its tricks is syncing a lenticular display with its forward-facing camera. This method relies on a display coated with a lenticular film: lenticules are tiny lenses on the base side of a special film, and the screen displays two sets of the same image. Using eye recognition, the device can track where the user’s face is and shift the display so the 3D effect holds no matter how the user views the screen.
Autostereoscopy will develop on handheld devices before it heads to large-format screens. Other “glassless” products for 3D include mobile phones, laptops, cameras and camcorders.
Autostereoscopy relies on the use of special optical elements between the television screen and the viewer so that each eye of the viewer receives a different image thus producing the illusion of depth. This can typically be achieved in flat panel displays either using lenticular lenses or parallax barriers.
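Both lenticular lenses and parallax barriers rely on the same underlying trick: the panel interleaves the two views column by column, and the optics steer alternate columns to different eyes. A minimal sketch, using rows of placeholder pixel labels rather than real image data:

```python
# Sketch of the column interleaving behind lenticular and parallax-barrier
# displays: even columns carry the left-eye view, odd columns the right-eye
# view, and the optical layer in front sends each set to the matching eye.

def interleave_columns(left, right):
    """Weave two equally sized views (lists of pixel rows) into one panel."""
    panel = []
    for left_row, right_row in zip(left, right):
        row = [left_row[x] if x % 2 == 0 else right_row[x]
               for x in range(len(left_row))]
        panel.append(row)
    return panel

left_view = [["L0", "L1", "L2", "L3"]]
right_view = [["R0", "R1", "R2", "R3"]]
print(interleave_columns(left_view, right_view))
# [['L0', 'R1', 'L2', 'R3']]
```

This also makes the resolution cost visible: each eye only ever sees half of the panel’s columns, which is why these screens reduce effective image resolution.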
One downside of both lenticular and parallax-barrier screens is that if you move your head, or get too close to or too far from the screen, the effect breaks down. Displays like this work reasonably well in portable devices like the Nintendo 3DS and Sony's TD10 because their screens are small, but scaling up is very expensive.
Another issue is that both lenticular and parallax-barrier screens reduce overall image resolution, which has unpleasant consequences for the image quality of 2D footage. To compensate, any future big-screen autostereo TVs will need to have a much greater resolution than today's HD models.
3D Production Challenges
3D production techniques and their associated complexity would require another long post of their own.
The stereographer is a new vocation in film, television and video games production. This person will monitor material from one or more 3D camera rigs and check that the 3D image is correctly aligned and positioned in the 3D space. The stereographer will also ensure that the 3D image is kept within the allocated depth budget throughout post-production.
Suffice it to say, it is easy to produce “bad” 3D. Two proponents of producing “good” 3D, and evangelists of the techniques to do so, are Steve Schklair and Vincent Pace.
Click on this link for an interesting video interview with Steve Schklair, who talks about what difference filming The Hobbit in 3D at 48 frames per second will make, new 3ality Digital technology to enable fast 3D cutting, a large 3D movie project in Russia, and how 3D technology has changed since 3ality Digital’s U23D production.