Gamasutra: The Art & Business of Making Games
Q&A: Oculus' software guru sheds light on the new SDK driving DK2
August 1, 2014 | By Alex Wawro
More: Console/PC, Programming, Production, Exclusive



VR developers, take note: if you built your game for the initial Oculus Rift prototype, you've got some work ahead of you to get it up to snuff on the latest development kit.

Oculus VR recently started shipping the second iteration of its Rift headset development kit, better known as DK2, and developers have already started publishing preliminary guides to making games that take advantage of newly-added components like an HD display and integrated latency tester.

At the same time, Oculus shipped a significantly updated version of the Rift software development kit with new features, including a dedicated Rift display driver and a positional tracking system. The company actually made a show of delaying DK2's ship date a week or two in order to fix some issues with the new v. 0.4 SDK.

Gamasutra recently caught up with Oculus' chief software architect Michael Antonov via email to learn a bit more about how and why the Oculus SDK has changed, and what it means for developers of VR titles. Here's an edited transcript of our conversation.

What's new in the redesigned SDK, and what inspired those changes?

MA: The latest Oculus SDK is the result of over a year of engineering focused on providing developers with everything they need to build ground-breaking, consumer virtual reality experiences using the second Oculus development kit, DK2. Three of the most significant improvements are the addition of positional tracking, an Oculus display driver, and a new C API.

The positional tracking system relies on computer vision based tracking of infrared LEDs within the headset. Implementing robust optical tracking and vision-guided sensor fusion has been one of the most challenging projects at Oculus. A huge amount of research and development has been dedicated to eliminating jitter, keeping positional latency low, and handling corner cases such as when the user moves outside the field of view of the camera.
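
Oculus hasn't published the internals of its sensor-fusion pipeline, which blends IMU data with camera observations. As a rough illustration of the jitter-versus-latency trade-off Antonov describes, here is a hypothetical one-pole low-pass filter on a tracked position sample (illustrative only, not SDK code): a smaller alpha suppresses more jitter but adds lag.

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Hypothetical smoothing step: blend the previous filtered position toward
 * the new measurement. alpha in (0, 1]; smaller values smooth more but
 * respond more slowly, which shows up as positional latency. */
Vec3 smooth_position(Vec3 prev, Vec3 measured, float alpha)
{
    Vec3 out;
    out.x = prev.x + alpha * (measured.x - prev.x);
    out.y = prev.y + alpha * (measured.y - prev.y);
    out.z = prev.z + alpha * (measured.z - prev.z);
    return out;
}
```

Real tracking filters are considerably more sophisticated (they must also predict ahead to hide latency), but the tuning tension is the same.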

The new Oculus display driver should make the Rift significantly easier to develop for and use. With earlier hardware, the Rift’s screen was configured to either duplicate one of the monitors or extend the desktop. The orientation of the screen could also create additional challenges. The driver addresses these concerns, allowing applications to render directly to the Rift without it being a part of the desktop. It also supports mirroring Rift rendering to another window. The driver will continue to evolve as we gather more feedback from the community.

The Oculus C API was originally introduced with the 0.3 SDK release and was updated to support positional tracking with 0.4. The purpose of this API is to provide a simple, straightforward interface to hardware that hides many of the details and is easy to bind to from programming languages and engines. Although we still expect it to evolve, the API is getting closer to a point where it can be packaged up as a DLL or a shared library.

Why did Oculus announce it was delaying shipping DK2 units to do more work on the SDK? Why not, for example, just ship the units out when they were ready and push the SDK update live when it's ready?

The earlier 0.3 Oculus SDK code branch wasn’t truly ready for DK2. It didn’t include the display driver or the service model that make the headset significantly easier to use. Manual portrait display management would’ve led to developer frustration, as well as applications that rely on the old display setup.

With the 0.4 SDK and runtime nearly ready, we needed that extra week to improve its stability and robustness. Shipping the SDK alongside the hardware meant that developers would have a better out-of-the-box experience. We pulled in the schedule on 0.4 to bring the huge improvements to DK2 right at the launch, and we needed the extra time to stabilize the newest features.

Can you give me some clear examples of how developers can make good use of those new features?

To expose positional tracking, the Oculus SDK reports head pose as a combination of an orientation quaternion and a 3D position vector in space. In earlier versions of the SDK, translation was computed solely from the head model; starting with DK2 it includes correct positional data while the user is within the tracking volume. It should be easy to apply this tracking data to the camera view in most game engines, allowing players to move around in 3D space.
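
Applying such a pose to a camera amounts to rotating the view's basis vectors by the quaternion and offsetting the eye point by the tracked position. A minimal, engine-agnostic sketch of that math follows; the Vec3 and Quat types here are illustrative stand-ins, not the SDK's ovrVector3f/ovrQuatf.

```c
#include <assert.h>
#include <math.h>

/* Illustrative types -- the real SDK exposes ovrVector3f and ovrQuatf. */
typedef struct { float x, y, z; } Vec3;
typedef struct { float w, x, y, z; } Quat;

static Vec3 cross(Vec3 a, Vec3 b)
{
    Vec3 c = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return c;
}

/* Rotate v by the unit quaternion q using
 * v' = v + 2w(u x v) + 2u x (u x v), where u = (q.x, q.y, q.z). */
static Vec3 rotate(Quat q, Vec3 v)
{
    Vec3 u  = { q.x, q.y, q.z };
    Vec3 t  = cross(u, v);
    Vec3 tt = cross(u, t);
    Vec3 out = { v.x + 2.0f * (q.w * t.x + tt.x),
                 v.y + 2.0f * (q.w * t.y + tt.y),
                 v.z + 2.0f * (q.w * t.z + tt.z) };
    return out;
}

/* Camera eye point: the player's base position in the world plus the
 * head offset reported by positional tracking. */
static Vec3 camera_eye(Vec3 base, Vec3 head_offset)
{
    Vec3 eye = { base.x + head_offset.x,
                 base.y + head_offset.y,
                 base.z + head_offset.z };
    return eye;
}
```

The camera's forward vector is then rotate(head_orientation, local_forward), and the eye point feeds straight into the engine's view matrix.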

Translating in 3D virtual space is, however, the easy part of the challenge. Next you’ll need to figure out how head translation interacts with game scenery and engine mechanics. What happens, for example, if the user pushes their head through a virtual wall? Or moves out of the camera tracking range? Sergio Hidalgo discussed some of the challenges related to positional tracking in his article “VR: Letting Go of the Avatar.” One option for handling walls is to fade out the screen until the player moves back into a known space, but more elegant solutions may be waiting to be discovered.
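
The fade-out option mentioned above can be driven by how far the head has penetrated the geometry. This is a hypothetical helper, not part of the SDK; the fade range is an assumed tuning parameter.

```c
#include <assert.h>

/* Hypothetical helper: map head penetration depth into a wall (meters)
 * to a screen brightness in [0, 1]. Outside the wall the scene is fully
 * visible; by fade_range_m meters inside it has faded to black. */
static float wall_fade_brightness(float penetration_m, float fade_range_m)
{
    if (penetration_m <= 0.0f)
        return 1.0f;   /* head is outside the wall */
    if (penetration_m >= fade_range_m)
        return 0.0f;   /* fully faded out */
    return 1.0f - penetration_m / fade_range_m;
}
```

The returned value would typically multiply the final frame's color before distortion, restoring the view smoothly as the player backs out of the wall.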

Beyond first-person experiences, positional tracking provides a new dimension of input for developers to explore. While a handful of very new experiences like Lucky’s Tale and Superhot have highlighted some of these new possibilities, I’m excited to see what the broader Oculus development community comes up with once they have a DK2 and the new SDK.

I’m also looking forward to seeing developers begin to leverage the new display driver. From an engineering perspective, it’s quite easy to use: just create a window whose swap chain matches the resolution of the Rift and call ovrHmd_AttachToWindow on it. All of the swap chain output will show up on the Rift. Having the output redirected from a window does, however, open up the possibility of using the window surface for other things. Besides mirroring, it could potentially be used to display a third-person view or game statistics for an external observer.

So how has your work with engine makers like Unity and Epic evolved, and how is that reflected in the new SDK?

Our relationships with Epic and Unity have really grown from when we started the company in 2012. For example, with the latest Unreal Engine 4 release, Epic has actually integrated the Oculus SDK into the main codebase so that it works out of the box. They’ve also collaborated super closely with us on the core integration, major improvements and feature additions, QA, developer support, and even new samples and demos that ship alongside the engine.

The engine integration work has a profound impact on the SDK. On more than one occasion, we’ve modified the API and implementation to account for different engine aspects related to stereo rendering, multithreading, and state management. These changes have made the overall SDK more robust for integration into the hundreds of proprietary engines around the industry.


Comments


Kevin Fishburne
I don't know much about the OR, but if the device accurately and precisely reports its inputs then wrapping your own code around those, even if you had to create your own interpolation algorithms, would be trivial from a technical perspective as long as drivers for the device were installed or otherwise in the kernel. Maybe the SDK makes it easier to incorporate common I/O relationships into games not specifically created to read an OR's feedback?

The OR as hardware is going to be awesome. It will be interesting to see how devs integrate it into their games. Hopefully most digital publishers will disallow apps that use it to induce real terror (shock videos) or provoke real-world violence. Someday someone wearing a headset will accidentally kill someone and argue that he couldn't see and was confused because of the device. We'll all facepalm heartily, but it probably will happen eventually.

Gary Groy
What a bunch of nonsense. The release of this SDK broke everything, and you'd be hard-pressed to find one person speaking up in favor of it -- since its release Oculus has gone entirely silent and refused to keep us posted on fixes and updates.

"Our relationships with Epic and Unity have really grown"

Gee, maybe you should've told them about this new SDK before springing it on them, then? It sounds like it came as a complete surprise to Epic. I'm probably just bitter because this whole DK2 fiasco is making me look incompetent at work and my boss is incredibly pissed about things. Fix this stuff! Keep us updated!

Benjamin Quintero
This is what happens when you start out as a college kid with an idea and then get a multi-billion dollar injection just about 1 year later. Exponential expansion will always tear open the cracks, no matter how small they start.

It doesn't really matter how much money you have. In the end, people and time make it happen. They have the people, just not enough time to meet expectations. Oculus is at least another year away from being anything consumer-ready, but people have been expecting it ever since their Kickstarter ended a couple years ago.

