Bringing VR to Life
by Peter Giokaris on 01/16/14 08:38:00 pm

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

 


“On the other side of the screen, it all looks so easy.”

- Kevin Flynn, in Tron (1982)

I fondly remember watching Tron as a child. I was overwhelmed by the potential for endless beauty that could be generated by a computer. I kept dreaming of the day I would live in a world of synthesized light and sound.

Many attempts have been made since those early days of computer graphics to deliver the dream of virtual reality to the masses. Today, cell phone technology is driving down the cost of all computing elements required for a compelling VR experience. The dawn of virtual reality is finally here.

Something magical happens when we achieve the right balance between head movement and visual feedback. The brain accepts the visuals as solid and real, and you truly feel as if you are experiencing an alternate reality. Sensor fusion is a term you will hear more of as VR matures: it refers to the algorithms that take readings from the multiple sensors tracking your head movements and fuse them into a single estimate that keeps the visuals properly aligned. When our visual sense correlates with our head movement, the brain seems to give the experience a 'thumbs up' on believability. I think the power of VR will drive us to correlate more of our physical senses together: audio will be spatially modified as the head moves, and explosions in the virtual world may stimulate haptic sensors attached to our bodies.
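
Sensor fusion sounds exotic, but the core trick is easy to sketch. Below is a classic complementary filter on a single axis in Unity-style C#; it is purely illustrative, not the Oculus SDK's actual fusion code, and the axis convention is an assumption.

```csharp
using UnityEngine;

// Illustrative only: a minimal complementary filter for one axis (pitch).
// Real head-tracking fusion estimates full 3D orientation from gyroscope,
// accelerometer, and magnetometer at a much higher rate.
public class ComplementaryFilter
{
    // Weight given to the integrated gyro; the remainder corrects drift
    // using the accelerometer's gravity reading.
    const float GyroWeight = 0.98f;

    float pitchDegrees; // current fused estimate

    // gyroRateDps: angular velocity around the pitch axis, degrees/second.
    // accel: raw accelerometer reading (assumed axis convention).
    // dt: seconds since the last sample.
    public float Update(float gyroRateDps, Vector3 accel, float dt)
    {
        // 1. Integrate the gyro: smooth and responsive, but drifts over time.
        float gyroPitch = pitchDegrees + gyroRateDps * dt;

        // 2. Estimate pitch from gravity: noisy, but drift-free.
        float accelPitch = Mathf.Atan2(accel.z, accel.y) * Mathf.Rad2Deg;

        // 3. Fuse: trust the gyro short-term, the accelerometer long-term.
        pitchDegrees = GyroWeight * gyroPitch + (1f - GyroWeight) * accelPitch;
        return pitchDegrees;
    }
}
```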

Five Tips for VR Creators

The wonderful thing about virtual reality is that it delivers a unique and intuitive way to interact with a computer. Instead of the input devices we have grown accustomed to over the past 30 years (namely the mouse, keyboard, buttons, and joysticks), we will be able to use our heads, and eventually our whole bodies, to navigate any digital landscape we desire. This will no doubt open up a world of possibilities for games and other experiences.

Here are a few things to think about when creating your virtual reality masterpiece:

1. Iterate Quickly on New Ideas

When designing in a new medium like VR, the ability to iterate quickly is highly desirable. The reason is that many aspects of a virtual world do not reveal themselves as wrong until you step inside the environment itself.

For example, one important aspect to take into account is the scale of your world. On a 2D screen, much can be forgiven in the geometric relationships between 3D objects. As soon as we dive into the same world with a VR display, those relationships become far more important. Likewise, simple textures that worked on a 2D screen may appear grainy and low-resolution in VR, becoming noticeable and possibly distracting.

Being able to iterate on the features most important to your overall design while keeping the world highly optimized is a challenge in today's games, and it will prove even more so in VR. Using a middleware tool like Unity for quick experimentation enables developers to rapidly discover the best VR experiences. Knowing what doesn't work in virtual reality will be just as important as knowing what does. All of this knowledge can only help advance VR into a digital frontier we can all use to create amazing experiences that were simply impossible until now.
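
One cheap way to shorten that loop, sketched below under assumed names (worldRoot and the key bindings are arbitrary), is to make world scale live-tunable so you can judge it from inside the headset instead of guessing from the 2D editor view.

```csharp
using UnityEngine;

// Illustrative iteration aid: nudge world scale at runtime and judge the
// result from inside the headset. "worldRoot" and the keys are arbitrary.
public class LiveScaleTuner : MonoBehaviour
{
    public Transform worldRoot;   // parent of all scene geometry
    public float step = 0.05f;    // scale change per key press

    void Update()
    {
        float s = worldRoot.localScale.x;
        if (Input.GetKeyDown(KeyCode.Equals)) s += step; // '=' / '+' key
        if (Input.GetKeyDown(KeyCode.Minus))  s -= step;
        worldRoot.localScale = Vector3.one * Mathf.Max(0.1f, s);
    }
}
```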

2. New Mechanics for New Input Devices

VR is in its infancy. Aside from head orientation, there is no standard set of interfaces designed specifically for VR navigation; we rely mainly on gamepads, mice, and keyboards for selection and locomotion. Fortunately, some unique hardware interfaces are already available for prototyping new VR mechanics.

The original Microsoft Kinect has been around for quite some time now. It uses structured light (specifically, an infrared laser projector and a CMOS sensor) to create a depth map, which in turn can be used to extract the geometric relationships of objects in the real world, including a player's skeletal structure. However, its high latency and low-resolution mapping are not ideal for truly immersive VR experiences.

The Leap Motion is another device that works with infrared light and multiple cameras. It is meant for tracking a user's hand movements, and although it captures at a higher resolution than the original Kinect, it has a much smaller range (roughly a hemispherical volume with a radius of about half a meter).

Another device, the Razer Hydra, uses a weak magnetic field to detect the absolute position and orientation of both of the player's hands. Many excellent demos have paired the Razer Hydra and the Oculus Rift, combining positional head tracking with hand awareness. However, the device is subject to magnetic interference and global field nonlinearities, which can drastically affect the VR experience.
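
For prototyping hand presence, the shape of the code is much the same whichever tracker you use. The sketch below assumes a hypothetical ReadTrackedPose wrapper around whatever SDK you have; the light smoothing is one simple way to tame noise such as the Hydra's magnetic interference.

```csharp
using UnityEngine;

// Illustrative sketch: drive a virtual hand from a tracked controller.
// ReadTrackedPose() stands in for the real device SDK call (Hydra, Leap,
// etc.); exponential smoothing tames small amounts of sensor noise.
public class VirtualHand : MonoBehaviour
{
    public float smoothing = 20f; // higher = snappier, lower = smoother

    void Update()
    {
        Vector3 pos; Quaternion rot;
        ReadTrackedPose(out pos, out rot); // hypothetical wrapper

        // Frame-rate-independent smoothing factor.
        float t = 1f - Mathf.Exp(-smoothing * Time.deltaTime);
        transform.localPosition = Vector3.Lerp(transform.localPosition, pos, t);
        transform.localRotation = Quaternion.Slerp(transform.localRotation, rot, t);
    }

    void ReadTrackedPose(out Vector3 pos, out Quaternion rot)
    {
        // Stub: replace with the actual tracker read for your device.
        pos = Vector3.zero; rot = Quaternion.identity;
    }
}
```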

I expect all of this to change as VR matures. Highly accurate position and orientation tracking of the head will become standard; hand tracking, and eventually full-body awareness (with haptics), will follow. These interfaces will use a myriad of inertial, optical, magnetic, ultrasonic, and other sensor types to refine and fine-tune VR input devices. It will be exciting to see how these new interfaces are used, particularly when coupled with novel experiences that have not yet been explored.

3. A Few Words on Latency

Of all the issues that need to be solved for VR to be believable, latency takes the most precedence. Ideally, the "motion to photon" latency (the time from when your head moves to when that movement is represented on the display) should be no greater than 20 milliseconds. Above that, the experience does not feel nearly as real.

On the Rift, the sensors are read and fused at a sampling rate of 1000 Hz, so a reading takes 1 millisecond, and transport over USB adds roughly another millisecond. However, quite a few software and hardware layers sit between reading the head-tracking sensors and the resulting screen render, and each (screen update, rendering, pixel switching, etc.) incurs some latency. Typically, the total amount of latency is 40 to 50 milliseconds.

[Figure: "Motion to Photon" layers that may incur latency]

Fortunately, much of this latency can be covered up by predicting where your head will be. This is achieved by taking the current angular velocity of the head (from the tracker's gyroscope) and projecting forward in time by a small amount, rendering slightly in the future so that the user perceives less latency. The Oculus SDK currently reports both predicted and unpredicted orientation, and the predicted results are quite accurate.
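
The SDK does this for you, but the core of dead-reckoning prediction fits in a few lines. The sketch below is an illustration of the idea, not the SDK's implementation; it simply extrapolates the current orientation along the gyro's angular velocity.

```csharp
using UnityEngine;

// Illustrative sketch of orientation prediction by dead reckoning.
// The Oculus SDK implements a more sophisticated version; this only
// shows the core idea of extrapolating along the angular velocity.
public static class HeadPrediction
{
    // current:         latest fused head orientation
    // angularVelocity: gyro reading in the head's local frame, radians/sec
    // predictionTime:  how far ahead to extrapolate, in seconds (a small
    //                  slice of the 40-50 ms pipeline latency)
    public static Quaternion Predict(Quaternion current,
                                     Vector3 angularVelocity,
                                     float predictionTime)
    {
        float speed = angularVelocity.magnitude;  // rad/sec
        if (speed < 1e-6f) return current;        // head is effectively still

        float angleDeg = speed * predictionTime * Mathf.Rad2Deg;
        Vector3 axis = angularVelocity / speed;

        // A local-frame angular velocity composes on the right.
        return current * Quaternion.AngleAxis(angleDeg, axis);
    }
}
```

Note that prediction degrades as the prediction window grows, since the head can accelerate within it; this is one reason it complements rather than replaces the hardware improvements discussed below.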

An excellent blog post on VR latency by Steven LaValle (Principal Scientist here at Oculus) can be found here: http://www.oculusvr.com/blog/the-latent-power-of-prediction/

Prediction should not be the sole solution we rely upon, however. As VR systems improve, the layers mentioned above will contribute less to the latency pipeline. High-frame-rate OLED displays, running at 90 Hz or higher, will have an immediate impact on keeping latency low. It will be exciting to see the future technical breakthroughs that tackle this very important VR issue.

4. Keep the Frame Rate Constant

Equally important as reducing latency is keeping the visual frame rate both consistent and as high as the hardware will allow. The LCD panel in the Rift Development Kit refreshes at 60 Hz; a drop below that typically translates into a 'stuttering' image, and the effect is jarring enough to break the VR illusion.
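
During development it helps to catch drops the moment they happen rather than by feel. A minimal watchdog, assuming the DK1's 60 Hz panel and an arbitrary 2 ms of slack, might look like this:

```csharp
using UnityEngine;

// Illustrative development aid: log frames that blow the 60 Hz budget
// (16.7 ms) so stutter sources can be hunted down before they break
// presence. The budget and slack values are assumptions to tune.
public class FrameBudgetWatchdog : MonoBehaviour
{
    const float BudgetSeconds = 1f / 60f; // DK1 panel refresh
    const float SlackSeconds  = 0.002f;   // tolerate 2 ms of jitter

    void Update()
    {
        // unscaledDeltaTime ignores Time.timeScale (pauses, slow motion).
        float dt = Time.unscaledDeltaTime;
        if (dt > BudgetSeconds + SlackSeconds)
            Debug.LogWarning("Missed frame budget: " +
                             (dt * 1000f).ToString("F1") + " ms");
    }
}
```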

Stuttering from a fluctuating frame rate is different from another visual anomaly called 'judder'. In the context of virtual reality, judder refers to the perceived smearing and strobing that arise because the display panel and head/eye movement are not synchronized. Think of it this way: as you move your head, the image on the screen stays fixed for the duration of a frame (at 60 Hz, approximately 16.7 ms). Your head, however, keeps moving within that frame; the eyes and other senses integrate at sub-frame accuracy. You stare at the previous frame for a short duration before the display pops to the new one, and this constant popping from frame to frame while the head is in motion is what causes judder.

Judder can be corrected, but this requires improvements to be made to display panels used for VR. A great blog post by Michael Abrash talks about judder in more detail: http://blogs.valvesoftware.com/abrash/why-virtual-isnt-real-to-your-brain-judder/

5. Smashing Simulator Sickness

Within the inner ear of humans (and most mammals) lies what is known as the vestibular system. This system contains its own set of sensors (distinct from the five external senses we all know about) and is the leading contributor to our sense of movement, registering both rotational and linear acceleration.

When we turn our heads, the vestibular system senses the rotational acceleration. This sense fuses with the images we see in a head-mounted display (such as the Rift), and the result is a convincing virtual reality experience.

However, when one of the vestibular senses does not line up with what is being seen, the brain 'suspects' something is not right. The result can be a visceral response to the experience, a sensation we refer to as 'simulator sickness'.

Many situations can trigger this sensation. One is when the user moves their physical head but the virtual head position is not tracked: the positional sense in the vestibular system is stimulated with no corresponding visual change. Another is when the user moves through the environment with an external controller: the visual display rotates and translates without any corresponding vestibular stimulation.

Simulator sickness can be reduced by keeping the dissonance between the vestibular system and the visual display to a minimum. For example, keeping the user stationary while they look around the scene is relatively comfortable, because visual and head movements are almost perfectly matched. Many experiences, however, will require the user to move freely around an environment. Using non-intrusive forms of stimulation (such as audio, visual cues, and haptic interfaces) may let us reduce, and possibly remove, simulator sickness entirely. We might even mask it by actively engaging the vestibular system to control movement; we could locomote or rotate the player simply by tilting the head, as in the sketch below. It is in this particular area of virtual reality research that experimentation and knowledge sharing will become vital.
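
To make the head-tilt idea concrete, here is a rough prototype of tilt-driven locomotion. Everything in it (the dead zone, the speed, the axis conventions) is an arbitrary starting point for experimentation, not a recommendation:

```csharp
using UnityEngine;

// Illustrative prototype of head-tilt locomotion: lean past a dead zone
// to glide in that direction, so the vestibular system is engaged by a
// real tilt whenever the view translates. All values are assumptions.
public class TiltLocomotion : MonoBehaviour
{
    public Transform head;             // driven by the head tracker
    public float deadZoneDegrees = 8f; // ignore small, natural head motion
    public float speed = 1.5f;         // meters/second when leaning

    void Update()
    {
        // Signed pitch (lean forward/back) and roll (lean left/right).
        float pitch = Normalize(head.localEulerAngles.x);
        float roll  = Normalize(head.localEulerAngles.z);

        Vector3 move = Vector3.zero;
        if (Mathf.Abs(pitch) > deadZoneDegrees) move += Vector3.forward * Mathf.Sign(pitch);
        if (Mathf.Abs(roll)  > deadZoneDegrees) move -= Vector3.right   * Mathf.Sign(roll);
        if (move == Vector3.zero) return;

        // Translate on the ground plane, relative to where the player faces.
        Vector3 flatForward = Vector3.ProjectOnPlane(head.forward, Vector3.up);
        if (flatForward.sqrMagnitude < 1e-4f) return; // looking straight up/down
        Quaternion facing = Quaternion.LookRotation(flatForward.normalized);
        transform.position += facing * (move.normalized * speed * Time.deltaTime);
    }

    static float Normalize(float angle) // map 0..360 to -180..180
    {
        return angle > 180f ? angle - 360f : angle;
    }
}
```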

A Brave New (Virtual) World

We are at the start of an exciting era in human-computer interaction. Although the current goal is to fuse computer visuals with head motion, it will not stop there. As virtual reality matures, our other senses will be tapped as we start to take 'physical' form in the virtual worlds we are creating. The ability to use our hands to actually build from inside our virtual creations will no doubt bring forth experiences we cannot currently imagine. Eventually, our avatars will begin to solidify within the virtual space, and jumping between the physical and virtual worlds will feel like teleportation.

I, for one, cannot wait to finally be transported to the Grid.



Comments


jin choung
what i'd really love to see is an OS desktop that basically surrounds me... would love that much desktop real estate to place windows and menus all around and i can turn my head and look up and down and all of that is just areas that can hold my work and create the kind of clutter that i do in real life where it may look cluttered but it makes sense to me and i know where everything is. a lot of us have dual monitor setups now but a vr desktop would make our current setups look like postage stamps!

i can totally imagine a windows 9VR or mac osXVR or debianVR.

but in that situation, i would imagine that mouse and keyboard will be very important to getting any work done. what i'm imagining is a leap sensor being integrated with the keyboard and mouse so that i can see a half (or lower) opacity representation of my fingers relative to keys and mouse... and then i can lift my hand from keyboard and use a pinch and pull gesture to rotate around my 360 "desktop" space instead of actually spinning around in a chair to look behind me.

ooo... and i guess a good 360 spherical wrapped desktop background would be a good way to orient yourself and get your bearings (oh yeah, my browser windows are by the eastern lake shore).

right now, a lot of attention is being placed on gaming and simulation but this can be huge for just everyday computing. i think some of the impetus gets lost because there are lots of impractical fanciful representations of "cyberspace" where you're zipping around surrounded by neon... but i just want a fully 3d, fully encompassing workspace where i can use spatial memory to navigate to my work windows. 3d space isn't necessary to surf the web (sorry william gibson) but again, just for the desktop, spacing out different windows all around me would be awesome.

hope somebody's on that!

Tim Kofoed
Yeah, I agree.

That was also one of the first things I imagined: a 360-degree spherical desktop, where you can move the sphere by extending your hand and dragging it, "enter" windows to see another sphere (folder navigation), or drag each corner of a "window" to expand it so you can see inside.

...but unfortunately, I don't have the necessary experience in C++ to actually make a desktop UI.
I'm making a VR game part-time in Unity, so I could make a mock-up in Unity in a short time using the Oculus Rift and Razer Hydra as input devices... but so could many others out there ;)

If someone is working on it, then please link your progress!
If no one is doing it (which means someone is doing it in secret), then I don't mind getting the ball rolling by making a quick mock-up, but I don't have time to do much on it.

Matt Marshall
The thing with that would be input. Keyboard and mouse, even the Hydra, wouldn't cut it. The OS needs to have word processors (dictation?), typing (??), design tools (MS Paint through to Photoshop, etc.) (arm control?)...

The best and only way would be to somehow create a control scheme that can do ALL of these things relatively easily.

Personally I don't think anything short of FINGER recognition would suffice... and we still seem a ways off that, unless you think of Leap perhaps... but that doesn't really cover some of the other inputs...

Something spatial... as in OPEN spatial... would be best, otherwise you would have to always recalibrate where you are in real space...

That all said... it WOULD be awesome and I'd LOVE to be part of that.

Matt Marshall
Also, if the input has the ability to overlay FAKE hands, or use a camera in front that masks your hand out... then you have more options... I think this would be the best way. There is nothing worse in VR than having a body that doesn't move. What was worse, actually, was having a body that was a little weird (the moon landing game has a stubby thumb, and it's WEIRD!)

Many possibilities... and I think this would probably come around in the 2nd/3rd generation of VR tech.

My main worry is too many people getting on the same train... you can even see it now: STEM from Sixense, Leap, Hydra, OMNI, etc... Personally, I think we need a universal solution that everyone can move forward with, otherwise we'll just get a lot of wasted time with developers working on one, all, or none...

Tim Kofoed
True, the oculus/hydra setup won't work.

As Jin suggests, it would have to be at minimum an HMD, keyboard, mouse, and a stereoscopic camera looking at your hands, keyboard, and mouse in order to create an augmented reality overlay within the virtual environment.

You don't actually need to do finger recognition, though. You just need to let the user see both his hands and the keyboard/mouse.
I don't think you -need- voice control, even though that might help in some cases... but if you can see your hands and your keyboard/mouse, then I don't think voice control is necessary.

The Hydra is not worth much in such a setup... so any setup I could mock up wouldn't be worth much without the actual interface... Anyone else, with a stereoscopic camera, who wants to make a mock-up?

I believe it should be an add-on to existing OSes instead of another OS. This is just a UI change, so there is no need to make a new OS.
I don't know how much access you can get to Windows/Mac/etc., but I suspect both Windows and the various Linux OSes could give a VR UI program the info and access it needs to work.

I imagine current OS windows as 3D billboards in a sphere. As such, the current windows could be reused as is.

James Yee
Ghost in the Shell much?

Though I totally agree, that's what I want too. Though I wonder how much more... productive you'd be in a virtual desktop (desksphere?) versus traditional setups. Though having that much real estate has got to be nice. :)

Tim Kofoed
Honestly, I don't know if it would be more efficient. Maybe, maybe not... but I suspect it would be more fun. :)

...and when we have those options; new interfaces, experiences and opportunities will appear. Maybe we'll find ways to make things more efficient, or we'll learn how to manage real space... or make the Matrix/Holodeck v0.01 :P

Who knows?

I doubt the inventors of the internet could have foreseen its eventual impact on the world.

Stephen Horn
I'm not sold on the "sphere" notion - the non-Euclidean geometry introduces mapping and movement problems that should be familiar to anyone who has programmed for 3D games, and it limits the desktop area to 4*pi*r^2 units - but I'm totally sold on the idea of an infinite desktop plane. Combine it with a Leap Motion controller to sense a few simple hand gestures for moving and scaling the desktop plane and windows within it (let's say a hand with all fingers outstretched moves the desktop, two hands scale the desktop, one finger moves a window, and two fingers resize a window), and it seems like I don't really need a traditional, physical monitor anymore.

I don't know where to start, though. How would someone incorporate this into Windows properly? Does it replace Windows Explorer? Does it act like a display device that captures what would otherwise have been rendered and re-renders it, projecting it onto the 3D virtual desktop? Can you get a window to render when the OS thinks it's off-screen?

Maybe Linux would be a better place to start for something like that. At least in Linux you have access to the source code for everything.

Ruud van Gaal
Why not something like Google Glass which overlays onto your real vision? So you could see your real hands, but see your working environment more or less opaque. Would be much easier on the eyes, and you can always stare at a white (or black) board which then unclutters the 'noise' that is in front of you.

Travis Fort
I have a hard time visualizing the Oculus Rift as a true "input device" rather than a new display system, especially since most input devices rely on heavily repeated actions (button presses, small adjustments when moving) that would quickly become uncomfortable if you had to mimic them with head movements. There would definitely have to be some innovation in how input systems work if the Rift were to be used for that purpose.

Tushar Arora
I am so looking forward to the release of the MYO device. I think that form of input will suit the Oculus better than the Hydra has complemented it all this while. I hope it works as well as its promo videos!

Theresa Catalano
This article seems very premature. Especially for a piece of hardware that could easily end up being the next Virtual Boy.

Karl E
Have you tried the device in question? If not, it's about time. Try it with a well-calibrated Hydra. It's cool.

Sure, it is premature to talk about VR as an existing thing... from a consumer perspective. But this is not a site for consumers; it's for people who make games. And thousands of developers are already making games for the Rift. Their commercial success remains to be seen, but sometimes it's a case of nothing ventured, nothing gained.

