Gamasutra: The Art & Business of Making Games

3D Glasses: 11x Overhype
by Alexander Jhin on 03/25/10 07:19:00 am   Featured Blogs

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.


While 3D-enabled TVs dominated the Consumer Electronics Show, Avatar ruled the box office in 3D, and Nintendo has just announced a 3D successor to the DS, the resurgence of binocular 3D technology has met with an underwhelming reception.

Yes, historically this technology has been poorly implemented (anyone remember the Virtual Boy?) and many complain of the added eye strain. But the real reason nobody cares about binocular 3D is that the spatial information provided by 3D glasses is psychologically minor, like the difference between 5.1 and 7.1 surround sound systems.

When it comes to visually perceiving three dimensions, psychologists identify thirteen depth cues. Eleven of these cues are monocular, meaning they require only one eye to perceive spatiality. Because 3D glasses add a sense of spatiality only by providing a different image to each eye, the monocular cues remain available even without 3D glasses.

This explains how people with one eye can still function successfully in a 3D world, and why we have no problem perceiving 3D space when watching TV or looking at a perspective painting. Humans have spent centuries perfecting the portrayal of these monocular cues, from Renaissance perspective drawing to the magic of old-school 3D computer graphics produced by projection matrices, Z-buffers and other techniques. Needless to say, humans had mastered the art of reproducing monocular cues well before 3D glasses.
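At its core, the projection-matrix machinery that reproduces cues like relative size and linear perspective reduces to dividing by depth. A minimal sketch (the function name and numbers are illustrative, not from any particular engine):

```python
def project(point, focal=1.0):
    """Pinhole perspective projection: divide by depth (z).
    Farther points land closer to the image centre, reproducing
    monocular cues such as relative size and linear perspective."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

# Two same-sized posts, one twice as far away: the far one's top
# projects at half the height, which the brain reads as depth.
near_top = project((0.0, 1.0, 2.0))   # (0.0, 0.5)
far_top = project((0.0, 1.0, 4.0))    # (0.0, 0.25)
```

A real graphics pipeline wraps this divide in a 4x4 matrix plus clipping and a Z-buffer, but the depth cue it delivers to the eye is the same.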

Of the remaining two binocular cues, only one works with 3D glasses. The binocular depth cue of convergence arises from the physical sensation of both eyes angling differently to fixate an object at a given distance from the face. Since a viewer wearing 3D glasses is still looking at a flat screen that stays a constant distance from the eyes, this cue doesn't work with 3D glasses.

Thus, the only binocular cue that 3D glasses actually provide is stereopsis: the disparity between the images on the two retinas caused by each eyeball's slightly different viewing angle. This cue doesn't function for objects that are far away, as the difference between the images shrinks with distance. Adding one minor cue on top of eleven existing cues is not much of an improvement.
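The falloff with distance is easy to see from the geometry: the disparity stereopsis relies on is driven by the angle a point subtends at the two eyes, which collapses rapidly as the point recedes. A quick sketch (the 6.5 cm eye separation is a typical figure, not a measured one):

```python
import math

def convergence_angle_deg(eye_sep_m, dist_m):
    """Angle subtended at the two eyes by a point straight ahead.
    The disparity stereopsis relies on is the *difference* between
    such angles for objects at different depths."""
    return math.degrees(2 * math.atan((eye_sep_m / 2) / dist_m))

# With ~6.5 cm between the eyes, the angle drops from several
# degrees at arm's length to a tiny fraction of a degree at 50 m,
# so stereopsis says little about anything beyond a few metres.
angles = {d: convergence_angle_deg(0.065, d) for d in (0.5, 2, 10, 50)}
```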

3D glasses are worth even less when we acknowledge that the main driver of 3D spatiality is the brain. Psychologists have recently placed more and more emphasis on the brain's role in determining dimensionality, to the point that they've discovered the brain fakes binocular 3D vision even for parts of the visual field that only one eye can see.

Add to this the fact that many precision tasks, such as threading a needle or firing a weapon accurately, are actually easier with monocular vision than with binocular vision, and we begin to understand why nobody much cares about 3D glasses.

Comparing binocular 3D technology to monocular 3D technology is like comparing a 12-channel surround system to an 11-channel one. Sure, it's cool to geek out about the newest tech, but honestly, do you care if you can barely tell the difference? At best, 3D glasses are a minor evolution, roughly 1/12th better than monocular 3D. I'd prefer that developers spend their time on other things.


Merc Hoffner
While it's a strong argument from the psychologists (and the advent of fMRI has backed them with strong evidence in recent years), there's of course a giant counter-argument from the field of evolutionary biology. Having a second eye (or more) is generally a massive energetic expenditure, both in physical growth and operation and in processing. A vast number of organisms obviously use additional eyes to extend their field of view, but a similarly vast number use a second eye precisely for depth perception. If stereo measurement (and triangulation) weren't conferring a major perceptual advantage, then I would guarantee that two eyes would have been evolved out of far more organisms right quick.

Besides, we the real viewers (and might I add consumers) obviously took something positive away from Avatar. Most of us believe we saw something extra even if a psychologist tells us we didn't. It's a fool who ignores someone else's 'ignorance', and it's a bigger fool who makes products but ignores markets.

I'd prefer that developers spend less time entrenching their personal dissatisfactions and more time listening to the consumers doing the consuming.

Prash Nelson-Smythe
This is why I've always thought that this type of 3D tech is only really useful for objects closer than a couple of arm's lengths.

Is it the dissonance between the two binocular cues that causes dizziness/blurriness with these technologies? By this I mean: you have stereopsis (image difference between eyes) in action, so you use convergence (angling your eyes) to try and observe the nearby object, but there is only thin air in front of the screen to focus on, and this eye movement only serves to blur the image. Or are the relative positions of the two images on the screen adjusted to account for this? If they are adjusted, would there not be an optimal distance from the screen for a given screen size to reduce blurriness?
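For what it's worth, the relative positions are indeed adjusted in stereo rendering: a point's apparent depth is set by the horizontal offset between its two on-screen images. A sketch of the flat-screen geometry (names and numbers are illustrative):

```python
def screen_parallax(eye_sep, view_dist, target_depth):
    """Horizontal on-screen offset (right-eye image minus left-eye
    image) that places a point at target_depth from the viewer.
    Zero at the screen plane, negative (crossed eyes) in front of
    it, and approaching eye_sep as target_depth goes to infinity."""
    return eye_sep * (1.0 - view_dist / target_depth)

# Viewer 2 m from the screen, 6.5 cm interpupillary distance:
at_screen = screen_parallax(0.065, 2.0, 2.0)   # 0.0
behind = screen_parallax(0.065, 2.0, 4.0)      # positive (uncrossed)
in_front = screen_parallax(0.065, 2.0, 1.0)    # negative (crossed)
```

Note that the same on-screen offset maps to different depths at different viewing distances, which is one reason a given stereo image is only geometrically correct for one seating position.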

I think you're right about some of the limitations of 3D and the general underestimation of the extent of monocular depth perception. This is clear from the fact that you can get most of your depth information with just one eye. I imagine that the observed parallax effect, combined with head movement information from the cochlea/proprioception, produces a lot of subconscious depth information (perhaps this could be leveraged in the future?).

There are also health issues to consider with long-term use. We already know that looking at screens for long periods adversely affects eyesight. I would imagine binocular 3D technologies would have an even greater effect and perhaps confuse depth perception.

By the way, I don't believe it has been officially confirmed by Nintendo that they will use a 3D technology which relies on separate images for each eye in the DS successor.

Ian Fisch
I think you're ignoring the fact that not all monocular cues are available in videogames either. For instance, in real life, even with one eye, you can tell the depth of an object by whether or not it's in focus. This isn't something videogames mimic.

So it's not really 1/12 better with 3d glasses since not all of the other 12 cues are currently available.

Prash Nelson-Smythe

I count 5 monocular cues that are fully used by normal 2D screens (depending on the quality of the graphics engine). A couple more are partially used, and one of them (peripheral vision) can be used on an IMAX screen.

I don't think there's much point in really quantifying an improvement in depth perception; clearly the overall impression of the end user is the most important. One thing I do wonder, though, is whether the depth perception would be robust or reliable enough to become an integral part of the game mechanics. Obviously, the binocular cue will only be used in addition to the monocular cues anyway, so you can't fully separate the effects. However, I don't feel that game mechanics could rely on it. For this reason I can imagine a situation where, after the novelty gives way to familiarity, you might be left wondering "Can I switch this 3D off now and get on with playing the game comfortably?"

Alexander Jhin
@Merc -- While I agree that to some extent the "consumer is always right," I think that generally, consumers are unimpressed with binocular 3D glasses. But time will tell. I should also mention that many animals evolved two eyeballs not for binocular depth perception but simply because it increases field of view. Most prey animals have eyes on opposite sides of the head, which don't provide any binocular depth perception as the fields of view of the two eyes don't overlap. These animals rely solely on monocular visual cues even though they have two eyes.

@Prash -- Great point about proprioception + vision. Head movement with proprioception worked beautifully with Johnny Chung Lee's Wii head tracker. That demo felt very 3D, even though it only used monocular cues + proprioception.

@Ian & Prash -- You're right: not all monocular cues are used in all games. I recounted, and count 10 of 11 possibly being used in games: parallax, depth of motion, perspective, relative size, familiar size, aerial perspective, occlusion, peripheral vision (fish eye), texture gradient, and lighting and shading. Though not all games will use them all. Accommodation is not achievable using a monitor.

Merc Hoffner
Yes, I did mention. But most predators do use binocular vision, many to great effect for precision strikes. Last I checked, humans were more predator than prey, and videogames are as much about spatially precise strikes and manipulation as they are about situational awareness. If all those predators (and our ancestors) have benefited, then I'm sure we will too. However, it is unfortunate there aren't enough tool-using animals to judge the evolutionary significance of binocular vision for fine, complex environmental manipulation. Besides, if we're going to talk somewhat scientifically about the unimportance of stereoscopy in human vision, then we should really extend the argument to the unimportance of colour precision, or even of resolution. I'll guarantee that if developers didn't spend all that energy making their assets in high resolutions, then games would be cheaper, systems would be cheaper, and the images wouldn't look any less 'real'.

As a side note, I'm hearing rumours that Nintendo is using a lenticular/parallax display for glasses-free 3D in combination with eye tracking, meaning the rendered images can be perspective-corrected. I think being able to 'look around' something 3D will add a rather nifty novelty factor worth a fair shake. I mean, we deride novelty features in video games, but at the same time plenty of people are willing to spend plenty of money on a 'single image' crystal laser sculpture ornament. If we're talking near enough holographic surfaces - one of the staples of futurist visions - it would be cruel to nip in the bud something many have been waiting decades, nay, lifetimes for. Having said that, leveraging the tech from the games industry as a kind of programmable diorama is a bit awkward, but strangely I personally can't wait.

Stuart Evans
To say that the addition of the stereopsis effect only results in an image 1/12 better (or 1/11, however many depth cues we decide have already been implemented) would require that each depth cue contribute "equally" to the perception of depth.

It may well be (probably more likely) that some depth cues contribute more and some less. While I wouldn't comment on which depth cues might be more important than others, I would say that no matter how many other depth cues there are (even if there are millions), their existence alone does not dilute the importance of a single depth cue. So it is wrong to use the existence of 11 other depth cues as a metric for the importance of one.

I may not be understanding what you mean by "convergence", as this physical act of angling the eyes to fixate closer objects does happen when using 3D glasses. Objects meant to be closer to you are drawn with a greater offset between each eye's image, while objects further away sit in more similar positions. In order to look at the closer object you must turn your eyes further inward, and further outward for the distant objects.