The term "Next Gen" is bandied around pretty freely nowadays, but what does it mean? Some would say that it's a term that loses all meaning as soon as what's part of the Next Generation becomes current. There are those who claim it's all about delivering a higher level of graphic and sound immersion. Of course, there are the gameplay people who want to see exactly how far we can push simulations by using multiple cores to provide better physics and interaction.
Finally, you have John Gaeta and Rudy Poat - these guys want to get you off the couch and into an interactive cinema experience, and who better to do it? John and Rudy both worked on The Matrix - John won an Academy Award for his effects, which include "Bullet Time," and then went on to work on What Dreams May Come. John is currently working on a few projects, and Rudy is a Creative Director with EA Vancouver. We sat down with Rudy first to talk shop about the new process they've created.
Gamasutra: Can you give us a quick rundown of your project and the technology behind it?
Rudy Poat: Which one? Deep Dark or Trapped Ashes?
GS: Trapped Ashes.
RP: Trapped Ashes was our first experiment with interactive real-time cinema. We wanted to deliver high-definition (HD) frames that were film-ready the minute they came out of the box, in real time. Compositing and everything is done in-engine. What's cool about the film is that everything needed to be abstract, these soft, membrane-like images, and that's really hard to do in real time. Everything looks like Metal Gear or the Unreal Engine now, and we wanted to get away from that look in this movie. We decided to try to tackle it with a real-time engine, so we built a way to do those shots.
So the shots are not only created and delivered in real-time HD; they can also be loaded up at any time and you can move around in them in real time. The shots also run on a server, so over a network a cameraman could log in and film in real time. Another person could log in as a lighter and move the lighting around while the cameraman is taking pictures. You can have several people logged in at a time, working on the film.
GS: That's a very interesting way to go about editing. With so many people able to log in and modify the film, is there a locking mechanism, like the ones used in databases, so that only one person can use a given item at a time?
RP: Only if you run into a limited frame rate. Like with the machinima stuff, it's a network-capable engine, so it's collaborative. For instance, on one of the shots in Trapped Ashes we can have three or four people actually collaborating: a lighter, a cameraman, and an animator. The animator could be moving the fetus (there's a fetus in one of the shots).
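The database-style locking the interviewer asks about can be sketched as a per-resource lock registry: each editable item in the scene (camera, key light, animation rig) is claimed by one user at a time. This is a minimal illustration of the general idea, not the engine's actual design; all class, method, and resource names here are hypothetical.

```python
import threading

class ResourceLocks:
    """Hypothetical per-resource lock registry for a collaborative
    editing session. Each scene item (camera, lights, rig) can be
    held by at most one user at a time."""

    def __init__(self):
        self._guard = threading.Lock()   # protects the owner table itself
        self._owners = {}                # resource name -> owning user

    def acquire(self, resource, user):
        """Claim a resource for one user; returns False if someone
        else already holds it. Re-acquiring your own lock succeeds."""
        with self._guard:
            holder = self._owners.get(resource)
            if holder is not None and holder != user:
                return False
            self._owners[resource] = user
            return True

    def release(self, resource, user):
        """Release a resource, but only if this user actually holds it."""
        with self._guard:
            if self._owners.get(resource) == user:
                del self._owners[resource]

locks = ResourceLocks()
locks.acquire("camera", "cameraman")   # True: camera was free
locks.acquire("camera", "lighter")     # False: cameraman holds it
locks.acquire("key_light", "lighter")  # True: a different resource
```

In a networked session like the one Poat describes, a registry like this would live on the server, with acquire/release requests arriving as messages from each logged-in client.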
They could all be chatting with each other over mics at the same time, and the camera could be recording all of that data and streaming it straight to film. It's pretty neat. I know the machinima guys are using game engines to make these little movies; they all log in and do these stunts while recording them. We're actually building it that way from the ground up. The big thing for us is the content, not the engine.
Our next project would be something like this, but taken a few steps further. We're giving it looks and real-time experiences that haven't been seen in games yet. We want to do everything from interactive storytelling to, maybe, online delivery on the PS3. It could be some kind of online TV show that you could interact with, or where you create characters that would show up later on. Right now we're dealing with getting the content together. The engine is the foundation of it; Trapped Ashes was the first test, and one of the first times anyone has delivered a real-time film where you can interact with the shots. We want to try doing an E3-type thing where we show the film and then say, "Why don't you try making the shot with the same database?" That way you could be creating film on the fly. That's where we're going: TV, short films, artsy interactive cinema. It's all about content. We can always build the engine again and use parts of a lot of the real-time tools out there. The real trick is how we're putting it together and how we marry that with the content.