The way that they make back the money on the consoles, in general, is by charging the publishers. With direct digital it's a rev share; with retail it's manufacturing costs and stuff. You don't have exactly the same model in mind.
PL: Exactly. What I'm saying is we don't want to mimic that model. It was just an example of how, being able to bring those costs down, they're able to move so many units. If they were selling you an Xbox for what it actually costs to make on day one, then a lot fewer people would be purchasing them.
So essentially you're saying, without being specific, you want to innovate around the business and not just rely on selling hardware to make back your money.
PL: We don't want to sell things at a loss. We want to make sure we can make money even if we only sell hardware. At the same time, we can have a lot thinner margins if we can find other ways to make money. People have been speculating -- "Oh, what does that mean? Does it mean that there's going to be a subscription? Does it mean that you're only going to be able to get it if you buy it with my Comcast internet, or something?" But really, we don't know. We're looking at all different options. But we're not going to do anything that locks out certain markets or requires you to pay a subscription.
You spoke in your talk about how you're hoping to help solve issues around creating VR, best practices and stuff. Is it going to be in the form of best practices, or are we going to see SDKs that help with judder, and things like that? Or a mixture?
PL: It's a combination. What we're trying to do is take as much of the hard stuff as we can -- but we're leaving raw access for everyone who wants it. People can get the raw data from our sensor and try to do sensor fusion themselves. We want to leave that open.
But for the vast majority of developers, we're trying to take all of the hard stuff and put it into the SDK and do it ourselves. So things like judder compensation, reducing motion blur, high tracking precision, doing positional tracking, inverse kinematic models -- we really want to make it as easy as possible for developers who don't understand all of the technical side of that to make games for VR. So there will be a lot of stuff in the SDK, and as we make our SDK better, it will be easier for them to make good experiences.
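The do-it-yourself sensor fusion mentioned above -- combining raw gyroscope and accelerometer data into an orientation estimate -- can be illustrated with a minimal sketch. This is a generic one-axis complementary filter, not Oculus SDK code; the sample rate, filter gain, and sensor readings below are all hypothetical.

```python
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse a gyro rate with an accelerometer tilt estimate (one axis).

    pitch      -- current pitch estimate in radians
    gyro_rate  -- angular velocity around the pitch axis (rad/s)
    accel_x/z  -- accelerometer readings (in g); gravity gives an
                  absolute but noisy tilt reference
    dt         -- time since the last sample (s)
    alpha      -- how much to trust gyro integration vs. the accelerometer
    """
    gyro_pitch = pitch + gyro_rate * dt          # fast, but drifts over time
    accel_pitch = math.atan2(accel_x, accel_z)   # noisy, but drift-free
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Hypothetical stream: gyro reports 0.5 rad/s of pitch, while the
# accelerometer consistently indicates the head is tilted 0.25 rad.
# The filter settles between the two, weighted toward the drift-free source.
pitch = 0.0
for _ in range(1000):  # one second of samples at a hypothetical 1000 Hz
    pitch = complementary_filter(pitch, gyro_rate=0.5,
                                 accel_x=math.sin(0.25),
                                 accel_z=math.cos(0.25),
                                 dt=0.001)
```

A real head-tracking pipeline fuses all three axes (usually as quaternions) and handles magnetometer drift correction, which is exactly the kind of work the interview describes moving into the SDK.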
But some things we can't really put in the SDK. If someone wants to make a fighter jet game where the player just spins and barrel rolls continuously, that is probably going to make people sick, and that can only be solved by a best practices guide where we say, "Hey, that is not the best idea to do in virtual reality."
Or so many first person shooters have this interaction where you run into something, start shooting at it, and start running backwards until all of your bullets are gone, and you hope that you've killed it before you run out of bullets. That's probably the most common interaction in first person shooters.
In VR, and actually in real life, running backwards is not a very comfortable feeling, especially if you're strafing side to side as you run backwards, and especially if you're moving at 20 miles per hour, like you do in so many FPS games. That's something that can only be helped by a best practices guide and people figuring out what works in VR and what doesn't.
It sounds like you're really concerned not just with shipping the product but with getting people to understand how they should approach the medium, so to speak.
PL: Exactly. Because it's so different from making games for PC. A lot of the things do apply, but so many PC games, if you do just take them and port them over to VR, there are a lot of things just innately wrong with those games in virtual reality that can't be fixed just by tweaking the SDK or reducing judder. They're things that are fundamental to the nature of the game.
One thing you spoke about during your GDC talk is that the reduction of latency is a huge priority on your end -- you want to get as much latency out of the loop as you can on the hardware side, right?
PL: That is absolutely true. And actually, the stuff that we're doing in the lab right now, we think that we've got latency basically solved. We think that, for the consumer launch, we're going to be able to get latency to the point where it's not even an issue -- it's a completely nonexistent issue, completely beyond the level of human perception.
So it is a really hard thing to solve. We think, on our side, we're going to be able to get the latency down to next to nothing. Where the difficulty is going to remain is with game developers: how they do buffering in their engines, how they handle vsync, and whether their game engines can stay at 60 or 90 or 120 frames per second. And that's going to be the difficulty. Because even if we make perfect hardware, developers still have to make low-latency game loops.
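The frame-rate side of that point can be made concrete with a small sketch: at a given refresh rate, the engine's simulation plus rendering must fit inside a fixed time budget, or the frame slips to the next vsync and adds a full refresh interval of latency that no headset hardware can remove. The function names and timings below are illustrative, not from any actual SDK.

```python
def frame_budget_ms(refresh_hz):
    """Time available per frame before a vsync deadline is missed."""
    return 1000.0 / refresh_hz

def frame_is_on_budget(sim_ms, render_ms, refresh_hz):
    """A frame that overruns its budget is displayed one refresh late,
    adding latency on top of whatever the hardware pipeline contributes."""
    return sim_ms + render_ms <= frame_budget_ms(refresh_hz)

# At 60 Hz there are ~16.7 ms per frame; at 120 Hz, only ~8.3 ms.
print(round(frame_budget_ms(60), 1))   # 16.7
print(round(frame_budget_ms(120), 1))  # 8.3

# A hypothetical frame costing 4 ms of simulation and 5 ms of rendering
# fits comfortably at 60 Hz but blows the budget at 120 Hz.
print(frame_is_on_budget(sim_ms=4.0, render_ms=5.0, refresh_hz=60))   # True
print(frame_is_on_budget(sim_ms=4.0, render_ms=5.0, refresh_hz=120))  # False
```

This is why the interview singles out engine buffering: each extra buffered frame the engine holds before display adds another full budget's worth of motion-to-photon latency, regardless of how fast the tracking hardware is.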