As technology evolves, how do we interact with it? An iPhone is far more powerful than your average desktop was just a few years ago. And it has a camera in it. And it's got your schedule, and your contact list. And it has games. And, occasionally, it's even a phone.
We rely on our technology to an incredible degree. Have you ever been caught without your smartphone in a foreign country, or an unfamiliar town? It's almost existentially terrifying to realize how little we can do without our contemporary tech; that's why we have it around us all the time.
What does this convergence mean, not just for the future of games, but the future of technology in general, and the way humans interact with it?
Epic Games' Tim Sweeney knows a lot about tech; there's no denying it. In addition to being one of the paragons of game code, he has also read extensively about the limits of human perception and the nature of technological convergence.
In this extensive interview, we speak with him about the possibility of a graphics plateau, the promise of Unreal Engine 4, and what might happen if technology were all around you, all the time -- even more integrated into your life than it is now.
Back in 2008 I wrote about something that I think you would totally disagree with -- I felt we were starting to see diminishing returns on graphics, just in terms of whether people really cared about new particle effects or lighting, when "good enough" seems to work for many, in games like League of Legends that aim for compatibility over poly-pushing.
I know that you're very interested in graphical advances from a code perspective, and from a what-you-can-achieve perspective, but do you think that also pays dividends for the audience?
Tim Sweeney: Yes. We're still at the point where improvements in graphics technology are enabling major improvements in gameplay. Just the ability to do real-time lighting on environments now means you can construct a completely dynamic environment -- or destruct a completely dynamic environment -- and have all of the lighting respond accurately. It turns out that the technical features you need for that are really elaborate and expensive.
If you only have support for direct real-time lighting, like Doom 3 had, then all of your areas that are directly hit by light are bright, and all of the areas that aren't directly hit by light are completely black. So you need real-time indirect lighting, which means calculating two bounces of light, and so on, which really is only becoming possible now with today's GPUs, at 2.3 to 2.5 teraflops.
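(As a rough aside on what that indirect bounce buys you: the toy sketch below contrasts direct-only shading with a single hand-placed bounce off a nearby wall. The scene setup, names, and numbers are illustrative assumptions, not Unreal Engine code.)

```cpp
// Toy illustration of direct vs. one-bounce indirect lighting at a single
// surface point. Purely illustrative; not engine code.
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Lambertian direct lighting: zero when the surface faces away from the
// light, which is why a direct-only renderer leaves shadowed areas black.
static float directLight(Vec3 normal, Vec3 toLight, float lightIntensity) {
    return std::max(0.0f, dot(normal, toLight)) * lightIntensity;
}

int main() {
    const float lightIntensity = 1.0f;

    // A point on the floor facing up, with the light hidden behind a corner:
    // no direct contribution at all.
    Vec3 floorNormal  = {0.0f, 1.0f, 0.0f};
    Vec3 floorToLight = {0.0f, -1.0f, 0.0f};   // light is occluded / behind
    float direct = directLight(floorNormal, floorToLight, lightIntensity);

    // One indirect bounce: the light hits a nearby wall, and the wall
    // re-emits a fraction of that energy (its albedo) toward the floor point.
    Vec3 wallNormal  = {1.0f, 0.0f, 0.0f};
    Vec3 wallToLight = {0.8f, 0.6f, 0.0f};
    float wallAlbedo = 0.5f;
    Vec3 floorToWall = {-0.707f, 0.707f, 0.0f};  // direction from floor point to wall
    float bounced  = directLight(wallNormal, wallToLight, lightIntensity) * wallAlbedo;
    float indirect = std::max(0.0f, dot(floorNormal, floorToWall)) * bounced;

    std::printf("direct only:     %.2f\n", direct);            // 0.00 -> pitch black
    std::printf("with one bounce: %.2f\n", direct + indirect); // ~0.28 -> softly lit
    return 0;
}
```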
Even if your thesis is that we're getting diminishing returns on graphical effects, I think we're still at the point where making graphics innovations greatly improves our capability of implementing new kinds of games.

I saw the Unreal 4 demo, and it's very good-looking. But the thing that I found interesting, at least framed by what you're talking about, is that in a certain respect the closer you get to reality, the less impressive it is in a way, because as it gets closer to a "real thing," I know what real things look like, and we get into an uncanny valley situation. Obviously, we do this in a more fantastical setting, but that move toward reality to me almost seems like it's going to, at a certain point, start being less impressive.
TS: We don't necessarily want to simulate reality because reality is pretty boring, right? (laughs) Simulate realistic characters in a game, and they're probably just sitting around sending their friends stupid messages on Twitter. You want fantastical environments and fantastical characters, and that's really the big job of an engine -- it's not just to enable graphical realism but also to give our artists and designers the capability to really tweak things to create a custom look and feel for the game, and a custom enhanced version of reality that they can play around with consistently.
You're trying to solve a lot of new problems with UE4 -- indirect lighting, more efficient and dynamic particle effects, and that sort of thing. But what about some of the legacy problems that are still not totally solved, like shadows that are jaggy everywhere, and dynamic texture loading so that it doesn't have a pop effect -- those sorts of things?
TS: Well, each generation, we improve. We greatly reduce the flaws that you see, but we're still far from having enough hardware performance to completely eliminate them. The jaggies in shadows in Unreal Engine 1 were 3 feet wide, and now they're just a few inches wide. And that's great, but until they're much smaller than a millimeter you'll still notice those artifacts. Really, the amount of performance you need to solve this completely is immense. I think we're just slowly moving in the right direction there.
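(Some back-of-the-envelope arithmetic on why sub-millimeter shadow detail is so expensive: the covered span and resolutions below are illustrative assumptions of mine, not Epic's figures.)

```cpp
// Rough arithmetic on projected shadow-map texel size: one shadow map
// stretched over a fixed span of world space. All numbers are assumptions.
#include <cstdio>

int main() {
    const double coveredMeters = 30.0;  // world-space span one shadow map covers

    const int resolutions[] = {1024, 4096, 65536};
    for (int res : resolutions) {
        double texelMeters = coveredMeters / res;
        std::printf("%6d x %6d map -> texel ~%.2f mm wide\n",
                    res, res, texelMeters * 1000.0);
    }
    //  1024^2 -> ~29 mm texels (visible stair-stepping)
    //  4096^2 -> ~7 mm
    // 65536^2 -> ~0.5 mm, but that's 4096x the memory of the 1024^2 map
    return 0;
}
```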
The technology is solving other problems. For example, texture streaming has been a huge challenge given optical media. When you're playing Gears of War off a DVD, sometimes you see textures popping in just because we can only move the DVD head four or five times a second in order to load the textures in. If textures are coming into view at a faster rate, then you're screwed. If you look at what's possible now with solid state disk technology and flash memory storage, you have a factor of 10,000 less latency.
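(For a sense of scale on the streaming point, a toy comparison of blocking texture fetches per second. The seek and read latencies are ballpark assumptions on my part, not numbers from the interview, though the resulting ratio lands in the same general territory as the factor Sweeney cites.)

```cpp
// Back-of-the-envelope comparison of blocking texture fetches per second on
// optical media vs. flash storage. The latency figures are illustrative.
#include <cstdio>

int main() {
    const double dvdSeekSeconds   = 0.200;    // optical head seek + settle, ~200 ms
    const double flashReadSeconds = 0.00005;  // flash random read, ~50 microseconds

    std::printf("DVD:   ~%.0f texture fetches/sec\n", 1.0 / dvdSeekSeconds);    // ~5
    std::printf("Flash: ~%.0f texture fetches/sec\n", 1.0 / flashReadSeconds);  // ~20000
    std::printf("Speedup: ~%.0fx\n", dvdSeekSeconds / flashReadSeconds);        // ~4000x
    return 0;
}
```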
It's pretty significant.
TS: Oh, yeah! It's gigantic! It's able to greatly, greatly reduce some of those flaws. Every generation we're improving a lot of things, but we're still a long way from being able to simulate reality. For a long time, the Holy Grail was completely destructible environments; that means you basically have to build your game levels using architectural tools and engineering analysis so that, when the right amount of force is applied to your wall, it breaks. Then your level designers aren't just creative folks; they're structural engineers. There are significant barriers to a lot of advancements in those areas.
The thing I find funny about completely destructible environments is that any game could just become a flat plane at a certain point if you just blow everything up.
TS: (Laughs) You want to be able to completely destroy the world?
Yes -- some sort of antihero complex, probably, or maybe I just like playing Earth Defense Force.
Because for a long time, it was the most reliable way to create a competitive advantage for your product. As much as people dislike that concept, I believe they will dislike it even more when that is no longer the case. Welcome to a new cut-throat future where design innovation has dried up, technological improvements are minimal, and everybody fights over scraps.
Epic, as a tool builder, needs to sell its tools. And to show off how much better those tools are compared with their competitors', it needs to create awesome tech demos.
Just because an engine supports photorealism doesn't mean you have to go for that style. Just look at Borderlands or Dishonored.
One bit struck me hard:
"when you take that device and broadcast that image to the TV, then suddenly you just have a big, flat mouse-like surface."
For some reason this got me excited about the exact opposite experience: controlling your mobile device(s) with your console's controller.
How cool would it be to have a game that encourages you to lay out your tablets and phones so you can see them, and has the game's focus jump among them -- so you just hold the wireless controller? (+ imagine a game that let you control a wildly different game on your tablet, with the same controller output that you're sending to your main console? Gears on the screen, Mario on the tablet. It'd just be funny and weird.)
(+ imagine instantly shifting your button mashing focus to a side game on your tablet, because the main game is loading)
(+ imagine a game that encouraged a second player to hold up screens for you, to help out. maybe rotating and moving them as well.)
We mostly think about SmartGlass as a slow-response supplement, like those damnable Blu-ray movie tie-in apps.
But would it be possible to enable your "other device" to directly pick up a wireless controller's signal?
(How far from being Bluetooth devices are these console controllers?)
... I dimly recall some mysterious magic around Xbox controllers or systems having a "proprietary chip" (or whatever magic) that made it near-impossible to create your own wireless controller for the system. But I'm curious whether it'd be possible for a SmartGlass app to enable direct communication with a tablet or phone.
?
With paper inside.
Remember how easy it was to read an entire story without having to turn a page? And you could just put it down and pick it up later and you'd be at the last spot you read (bookmarks? What a joke!).
/end sarcasm
@Doug Poston
you mean these scrolls?
http://en.wikipedia.org/wiki/Dead_Sea_Scrolls
http://www.usc.edu/dept/LAS/wsrp/educational_site/dead_sea_scrolls/copperscroll.shtml
https://www.google.com/search?q=ancient+tablets&hl=en&tbo=u&tbm=isch&source=univ&sa=X&ei=F7LkUM7GLK2JiwL9x4GYAg&sqi=2&ved=0CDIQsAQ&biw=1304&bih=683
or these?
http://www.google.com/imgres?hl=en&tbo=d&biw=1304&bih=683&tbm=isch&tbnid=4n-IUBtZ7Jac4M:&imgrefurl=http://www.facebook.com/note.php%3Fnote_id%3D10150137591238661&docid=F-1zvR6uEC5r0M&imgurl=http://sphotos-a.xx.fbcdn.net/hphotos-ash4/208465_10150149302427662_629772_n.jpg&w=600&h=400&ei=zbLkUPj2NqWBiwLRioE4&zoom=1&iact=hc&vpx=605&vpy=305&dur=3946&hovh=183&hovw=275&tx=180&ty=98&sig=111718887821499247207&page=1&tbnh=150&tbnw=222&start=0&ndsp=33&ved=1t:429,r:29,s:0,i:181
or do you mean these scrolls?
http://en.wikipedia.org/wiki/Braille
http://www.youtube.com/watch?v=6de_SbVUVfA
http://vimeo.com/5323117
http://statues.com/mrc/
Or, what about the ever popular:
http://www.cafepress.com/+smiley-face+t-shirts?utm_term=smiley%20face%20t-shirt&utm_source=google&utm_campaign=humor%20apparel%20-%20us&utm_content=search-e&utm_medium=cpc
or,
http://www.stewartsigns.com/signs_led.php?code=x5GOOG&gclid=CNSBzLPkyrQCFSPhQgod4i4A-A
evolution, or vanity?
A lot of the other ideas that are being thrown around have potential, but are not anywhere near being a finished, polished, usable format. I think you have another solid generation of consoles coming out ahead. After that, who knows? Maybe you'll be wearing your display device around with you everywhere you go, and your TV, computer monitor, and iPhone screen will have vanished because it's all mounted to your head.
I think the Nintendo controller is probably the best working human-adaptive device on the market; it's shaped for human use and ergonomics. It has a lot of flaws, however, the biggest being the control loss on mini-joysticks. I think maybe switching to a more tuned device with "level" sensing is the more practical way to go -- not a full body suit, but something much more intuitive, like driving a car. A lot of people find the "airsticks" to be good, with pedals, for mechanical simulators -- yet they don't perform as "quickly" as a keyboard. That isn't a problem for abstraction, but it is for realism.
Where abstraction would work better is with "walk up" tabletop/desktop holographics (Star Wars battle chess), or multi-monitor setups (all facing up, for multiple-user capacity). The difficulty here is in recognizing how much input is going in one ear and out the other. That's the real trick: how much information is necessary to deliver an idea, and how many ideas.
I find, even now, that I am more satisfied with lesser simulations. And it's not just a realism/abstraction issue; it's an issue of fatigue. What's important, what ideas are you really trying to communicate? Does your user interface offer "down time", or information scaling? When you figure that out, the rest hopefully falls into place, with enough well-oiled technology.
Coming from a story-game emphasis, I'm more excited about layered reality tech (AR glasses, AR video chat, etc.) than the virtual end of the spectrum. A characterization engine that even approaches the uncanny valley seems to be decades off. But if you layer stories on top of the material world and/or live human communication channels, you can solve that problem immediately and open up new value (meaningful social collision, etc.).
Hyperbole.
I know people who purchase books because of the art on the cover, in spite of the old saying. Look at any landfill to see all of the perfectly functional kitchen cabinets rotting in the sun, because the US alone wants to spend over $1B each year to make them pretty again.
Graphics are not a question of the hyper-real but a question of what we perceive to be perfection. How close can we get to the vision in our minds? How close can we get there in real time, and still have the budget to make some semblance of a game around it?
Everyone is chasing the dream of visual perfection at all costs because it is the easiest way to instantly connect someone to your product without the grueling effort of the audience having to visualize it for themselves. Graphics will always prevail for the same reason(s) Megan Fox keeps getting work.
Search around YouTube for 2013 trailers. There you will find a long collection of quick, high-impact scenes with hair-raising "dong" sounds and a visual feast of crumbling buildings, undead creepers, and people screaming in slow motion. This is the language that people understand. It's no secret why games are clawing at cinema and trying to be like it; we walk out of that two-hour visual orgasm feeling like we need a cigarette when it's over. Why wouldn't you want to copy that? After all, with a number like 99.999%, chances are pretty good that the people with the money, and sometimes the vision, feel safe in their little box of pretty things. No one wants to look at the ugly box, ewww.
Do you know what the top-selling books are right now (and typically)? Self-help in one form or another, whether it be religious or weight loss or even financial success. Many of the top-selling books on Amazon are books for people who are unhappy with their imperfect selves.
Graphics go beyond digital. So buck up boys, it's time to add more hours to your shift... Gamers want perfection and will not settle for less.