Raw Power Still Matters
by Benjamin Quintero on 02/15/13 09:55:00 am

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

 

Over these last few years, I must admit, many of us have become complacent about modern graphics.  Ask most games journalists and general enthusiasts and they will tell you that the graphics are okay, but "aesthetic" has become the new buzzword. 

Many people don't bother to read video card specs anymore, or dig through the internet for obscure facts like how many gigaflops a card can crunch or how many raw triangles per second it can transform. 

The fact is that if you've purchased a decent gaming PC at any point as far back as 2007, you probably have a machine that will play most console ports.  Another trend that has slowed the progress of technology is the strong presence of 2D games on mobile devices, which are finding their way to PC as well.

It's no secret that creating a product for the lowest common denominator is a surefire way to be as mainstream as possible.  What better way to run on grandma's 1990 Tandy PC than to hold your technical ambitions back to the barest of drawing techniques?

It's not all bad to push technology aside and hand the reins to the artists, but there is an argument for pulling back on those reins and shifting control toward the kind of engineering marvels we saw in the early days of hardware acceleration.  My justification comes from the fears many developers have about the next generation. 

Some developers claim that budgets will increase anywhere from 50% to 100%, meaning we could see average game budgets in the $100M+ range.  Much of this cost comes down to the sheer amount of content and the level of detail required for that content.  It will take a small country to produce the content we need under the technical constraints of most real-time systems.

But what if those technical constraints were alleviated, at least to a degree?  Instead of the NextBox or the PS4 being a mainstream version 1.5 of its predecessor, what if these devices were absolute powerhouses? 

My reasoning is simple: no one is crazy enough to spend $150M or more on a game that risks failing completely.  But if games could be made at the same scale or less, and more effort could go into better lighting, shadowing, real-time effects, and truly intelligent AI, the line blurs.  Games can creep closer to film-quality visuals, and it doesn't have to mean more triangles.

It is shocking what a couple of post-process effects can do for an image, or enabling 16x+ anti-aliasing, or a fully dynamic lighting system in place of the usual costly, time-consuming baked environments.  The idea that we need to keep throwing more triangles at the problem feels like an old trope we've trained ourselves to believe. 
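To make the post-process point concrete, here is a minimal, illustrative sketch of two cheap full-screen passes, gamma correction and a vignette, applied to a tiny grayscale framebuffer. This is plain Python running on the CPU purely for clarity; real engines run these passes as per-pixel GPU shaders, and all names here are invented for illustration.

```python
import math

def post_process(framebuffer, gamma=2.2, vignette_strength=0.4):
    """Apply two cheap full-screen passes to a grayscale framebuffer
    (values in 0..1): gamma correction and a radial vignette."""
    h = len(framebuffer)
    w = len(framebuffer[0])
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    max_dist = math.hypot(cx, cy)
    out = []
    for y, row in enumerate(framebuffer):
        new_row = []
        for x, value in enumerate(row):
            # Pass 1: gamma-correct the linear value.
            v = value ** (1.0 / gamma)
            # Pass 2: darken toward the screen edges.
            dist = math.hypot(x - cx, y - cy) / max_dist
            v *= 1.0 - vignette_strength * dist
            new_row.append(min(1.0, max(0.0, v)))
        out.append(new_row)
    return out

# A flat mid-gray 4x4 "frame": after processing, pixels near the
# center stay brighter than the corners.
frame = [[0.5] * 4 for _ in range(4)]
result = post_process(frame)
```

Neither pass adds a single triangle, yet together they visibly change the character of the image, which is the whole argument.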

More content is not the solution to cutting budgets.  I feel that a smarter use of that power to procedurally improve the look of a game will result in something that feels next-generation without costing like a next-generation game.

Personally, in the next generation I would like to see what processing power can do to improve the assets being created today.  Adding more GPU cores would, I feel, be a bigger win than adding a second screen or bundling a Kinect 2.0 with every console.  Raw, unadulterated processing power is just the thing to break the chains of these budget concerns, and maybe spark a little movement in the PC space as well.

It sounds ironic, because we assume most developers will instantly consume this power with 100k-triangle eyelashes for their RTS units, but that is exactly what needs to be avoided if you want to stay in business.  The big win in the next generation will not be screens or motion controls; it will be each developer's ability to find ways to get that much closer to a Pixar-quality film, and do it in 33ms or less. 
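The 33ms figure above is simply the per-frame time budget at a 30 frames-per-second target; as a quick sanity check (the function name is invented for illustration):

```python
def frame_budget_ms(target_fps):
    """Milliseconds available per frame at a given target frame rate."""
    return 1000.0 / target_fps

budget_30 = frame_budget_ms(30)  # the ~33ms figure cited above
budget_60 = frame_budget_ms(60)  # halved again for a 60fps target
```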

The only way that happens on a reasonable budget is not through pure aesthetics, and definitely not through more content creation, but through a strong technical pipeline backed by the raw power to make it happen.

...Fingers crossed that Sony and Microsoft do the right thing.


Comments


Jacob Pederson
I'm not a game dev by any stretch, but aren't assets right now being made at much higher resolutions/poly counts than the game actually needs, then scaled back as required? I don't think asset quality and detail are going to be the huge cost increasers this time the way they were in the transition from PS2 to PS3 graphics.

I suspect that much of the cost of switching generations this time will come from all those hordes of developers needing to learn new tools, techniques, and UIs for the mandatory new inputs coming out for the Wii U (gamepad) and Microsoft (mandatory Kinect?).

I totally agree with you (and Carmack) that pure image-quality features like texture filtering, anti-aliasing, or lower-latency frames are epic in terms of what they can do for the quality of a game; however, I don't think we'll be seeing that kind of post-processing on consoles . . . ever, with the exception of some vastly scaled-down options like 2x anti-aliasing. Why? Because consoles are by definition a cheaper PC, while post-processing techniques are by definition expensive.

Benjamin Quintero
Jacob, that's not exactly how it works, but every studio handles its normal map pipeline differently. Some studios paint them because it's faster; some model them for accuracy. Even then, when the high-poly model is made, it's not always with the greatest of care. Many high-poly versions of game models are pushed, pulled, pinched, and contorted any way you can to get the shape you want. They don't always make for well-animated geometry, because that was never their purpose. And rigging and UV mapping a 1M-triangle character is a very different beast from a 10k game character. Cinema uses something called Ptex (I think that's the name), which maps a texture to each quad just to avoid UV mapping high-poly characters. The closest we have to that in games is id Software's MegaTexture.

EDIT: http://ptex.us/overview.html This explains more of what some cinema studios do to avoid the hassle of trying to map a 2D texture onto a high-resolution 3D model.
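For readers unfamiliar with the idea, here is a rough Python sketch of the per-face texturing concept behind Ptex. All names are invented for illustration; the real library also stores per-face mipmaps and adjacency data for seam-free filtering, which this toy version omits.

```python
def make_face_tile(resolution, shade):
    """A square tile of constant 'shade' standing in for painted texels."""
    return [[shade] * resolution for _ in range(resolution)]

class PerFaceTexture:
    """Minimal sketch of the Ptex idea: each quad face owns its own small
    texture tile and is sampled with local (u, v) in [0, 1) -- no global
    UV unwrap of the mesh, no shared atlas, no seams to hand-fix."""
    def __init__(self):
        self.tiles = {}  # face index -> 2D tile

    def add_face(self, face_id, tile):
        self.tiles[face_id] = tile

    def sample(self, face_id, u, v):
        tile = self.tiles[face_id]
        res = len(tile)
        # Map local face coordinates straight into the face's own tile.
        x = min(int(u * res), res - 1)
        y = min(int(v * res), res - 1)
        return tile[y][x]

tex = PerFaceTexture()
tex.add_face(0, make_face_tile(4, shade=0.2))
tex.add_face(1, make_face_tile(4, shade=0.9))
sample_a = tex.sample(0, 0.5, 0.5)  # reads face 0's own tile
sample_b = tex.sample(1, 0.5, 0.5)  # reads face 1's own tile
```

The point is that the artist never has to flatten a million-triangle sculpt into one 2D layout; each face is its own tiny texture.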

Kris Graft
"Adding more GPU cores I feel would be a bigger win than adding a second screen or including a Kinect 2.0 with every console."

I don't necessarily disagree with your sentiment about raw power and where to direct it, but one of consoles' biggest draws is unique, standardized input. I don't think that motion or tablet control is necessarily the answer, but when an input method successfully connects with an audience (as in the case of the Wii), it does wonders for making a platform exciting and relevant.

Also a console maker can play the graphics game all it wants, but as you know, once it launches, hardware-wise it's a static platform for years and years to come. If you're really interested in the constant evolution of raw power, there's always the mobile space...

P.S. Thanks for contributing to our blogs!

Benjamin Quintero
Kris, I completely agree that mobile is seeing huge leaps, but that is only because it is still, and forever, playing catch-up. It will be hard for any unplugged device to compete with high-end gaming hardware any way you slice it. With consoles, their being static is even more reason to come out strong, unless the plan for new consoles is to mirror the pace of mobile or the PC's semi-annual releases.

I encourage experimentation, but honestly I question the cost benefit of something like a second screen, and motion controls I think have overstayed their welcome. I'd sooner see a handheld with Oculus Rift support than more of the same we saw from this generation, especially since it comes at the cost of more processing hardware. That's all I'm saying. Would you rather have Microsoft spend that $50-$100 on embedded Kinect hardware (passing the cost on to consumers), or on an additional core, or maybe another 1GB of RAM? I can think of a million things to do with another GB of RAM, but I can only think of one to do with a Kinect ;).

Jonathan Jou
In spirit, I agree with the sentiment that there is still untapped potential in console hardware improvements. However, I'm inclined to look at the PC market, where some companies have made a living pushing polygons and pushing modern GPUs to their limits. To me, this is a telling tale: pushing polygons is easy; making things look *really* pretty requires more than a willingness to forsake lower end hardware configurations.

Are you thinking the next consoles should be able to make graphics comparable to PC marvels? It doesn't seem like too many games have even pushed current consoles to their limits, and even then they aren't actually too far from what the PC market has to offer. Viewtiful Joe, Okami, and Wind Waker came out on last gen hardware. Skyward Sword came out on the least powerful of the current generation, and Borderlands 2 looks not too different from its first entry.

If I were to guess, my guess wouldn't be that current gen hardware is what's holding the industry back. I'd say that the hardest part in doing something other than throw triangles at the problem is figuring out what to do at all. Procedurally gorgeous art could indeed potentially benefit from more powerful hardware, but I'm of the opinion that the real challenge will have less to do with procedures and more to do with art.

Which is to say, art is hard. Even if we made tech easier, art would still be hard. And as the industry currently stands, it's already beginning to seem like the art is harder than the technological hurdles.

Benjamin Quintero
Jonathan, yes, the titles you listed were all great, but they were also all toon-shaded games. Though toon shading does seem to stand the test of time, much like 16-bit pixel art, it's just one technique. More processing power is not going to stop toon-shaded games from appearing; if anything, they will start to look more and more like their TV counterparts. Some of the Naruto games are getting scary close, but they are not quite there yet.

To your comment about art being hard: many professional artists would tell you that their biggest challenge has less to do with art and more to do with making art under tight resource budgets. It's tough to make a believable character or environment when the lead engineer comes by your desk and says you need to shave another 50% off your memory footprint for level 3. Of course resource-hungry game developers will always take what you give them, and then some, but if the NextBox is simply a quad-core PC with a 2008 video card in it, then it will suffer the same fate the Wii U is suffering right now.

To your comment about PC: there is no high-end market BECAUSE of consoles =). Crytek tried to make a high-end PC game and no one bought it. Eventually they gave up and started making whatever they could fit onto a console. If consoles at least raised the bar, it would spark some growth in the PC market as well. As it stands, hardware is at a stalemate, and mobile is catching up only because everyone else is slowing down. Soon mobile will be in the same boat as everyone else as it starts to hit physical limits and battery problems abound.

I'm not saying that games like the ones you mentioned can't continue to be made on current hardware, but I can promise that more memory and more processing power (on a scale of magnitudes, not a 0.5 increment) would produce better games, faster, if those resources were used intelligently. Imagine all of those great games you mentioned with no load times! All of Wind Waker is probably less than 1.5GB, which could be mapped and loaded to a RAM drive. That is just an abuse of power, but it goes to show how processing power can actually make development cheaper and easier.
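A toy sketch of that "RAM drive" idea: preload every asset into memory once at boot, then serve every subsequent "load" straight from RAM with no disk seek. This is illustrative Python with invented names, standing in for what would really be an OS-level memory-mapped file or a preallocated engine heap.

```python
import io

class RamAssetCache:
    """Hypothetical asset preloader: pay the disk cost once up front,
    then every load is an in-memory copy."""
    def __init__(self):
        self.blobs = {}

    def preload(self, assets):
        """assets: mapping of name -> bytes (a stand-in for files on disk)."""
        for name, data in assets.items():
            self.blobs[name] = bytes(data)

    def open(self, name):
        # An instant "load": a seekable stream over the in-memory copy.
        return io.BytesIO(self.blobs[name])

cache = RamAssetCache()
cache.preload({"level3.pak": b"geometry+textures", "music.ogg": b"pcm"})
level = cache.open("level3.pak").read()
```

The trade is obvious: memory for latency. With enough RAM, the trade stops being a trade at all.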

Jonathan Jou
You've hit on the exact point I'm trying to make: I believe resources are already being squandered this generation, and that the era of developers who focus on making fun games, not optimized games, is well on its way. Back when performance literally limited the number of pixels you could draw on screen, it was hard to make things look really good. Now we have a slider, and when I slide that slider to the maximum, to be honest, graphics get closer to the uncanny valley than to photorealism. This is a barrier I don't think more tech will break, certainly not just because Microsoft and Sony put some oomph into their hardware.

I'd still point you to things like TERA, Guild Wars 2, and Dear Esther as proof that pretty games are being made. In fact, I'd point to them and suggest that all that power isn't being put to use. Square Enix's Luminous engine is gorgeous, RAGE was gorgeous, and I can certainly appreciate how visually stunning they are. On the other hand, I'd find it hard to believe that they require hardware that doesn't yet exist. I'd say if Microsoft and Sony put a graphics card from the last 18 months into their console, they'd get more than enough power to enable gorgeous games.

So is the problem a lack of hardware? I'm sure hardware could help. But I guess I don't share your optimism about how capably developers will make use of the technology--there were games that were visually stunning before, and there will be visually stunning games in the future, but I feel like that depends just as much on the man as it does on the tools.

Benjamin Quintero
Jonathan, no argument there. A card from the last 18 months (plus the added bonus of shared-memory architectures) would be a great boost. I guess time will tell what they settle on, but all signs so far point to fairly dated specs (all rumors for now). And the uncanny valley is a product of some developers chasing the wrong carrot =). I was totally fine with the sort-of-realistic comic style of Rage. It was just on the cusp of getting weird but maintained enough painterly style to keep me in tune with their vision.

Also, I shouldn't harp on graphics alone. When I refer to "raw power" I am also talking about CPU and RAM, not just pixel bandwidth. RAM is what helps artists sculpt more interesting worlds and lets engineers cache more computationally expensive information. CPU is what lets us crunch awesome AI, perform hundreds or thousands of line-of-sight (LoS) checks per frame, create dynamic physical worlds, and do more aggressive and more accurate pathfinding to keep AI from looking stupid the way they do when they run the wrong way or hide behind a 4" lamp post. CPU and RAM are what allow sound engineers to mix and apply real-time post effects to audio, or play hundreds of independent creaks and plops to set the mood. It's the whole package. There are things you just can't do on current hardware, or that can maybe be done only at a significant sacrifice of nearly everything else.
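As a concrete example of one of those per-frame queries, here is a toy grid-based line-of-sight check that walks Bresenham's line between two cells and fails if a wall sits in between. This is purely illustrative Python with invented names; a real engine would raycast against 3D collision geometry, but the per-query cost model is the same.

```python
def line_of_sight(grid, start, end):
    """Walk the Bresenham line from start to end over a 2D grid
    (0 = open, 1 = wall); return False if a wall blocks the ray."""
    x0, y0 = start
    x1, y1 = end
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while (x0, y0) != (x1, y1):
        # Check every cell along the ray except the starting cell.
        if grid[y0][x0] == 1 and (x0, y0) != start:
            return False
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return grid[y1][x1] != 1

# The wall column at x=2 blocks the top row but not the bottom one.
grid = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
blocked = line_of_sight(grid, (0, 0), (4, 0))
clear = line_of_sight(grid, (0, 2), (4, 2))
```

Multiply one such query by every AI agent against every potential target, every frame, and the CPU budget argument makes itself.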

Ultimately I think we agree on the same points, maybe just from different angles. I don't think pushing more polygons is the solution; my point is that being smarter with this "next gen" power, whatever it turns out to be, will be a bigger win than the classic churn and burn of simply more content (i.e. bigger budgets). The era you are referring to is happening because of growth in technology, because of the leaps and bounds we've experienced in processing power.

Mark Morrison
Maybe the entertainment industry and hardware makers aren't looking at it from a realistic perspective? Check this out: http://graphics.pixar.com/opensubdiv/

Amir Sharar
I think it's obvious that these new systems are not dedicated gaming machines, just as modern phones aren't just phones. Features not directly related to gaming are certainly going to contribute to the cost of the machine, as you've stated.

With that in mind, I think the most effective way to convince hardware makers to pack more punch into their consoles is to tie the benefits of graphical processing to all of these other non-gaming features. With enough processing power, there is little reason why the OS should stutter or hesitate when gamers move from their game to Twitter, where they can share their experience, to YouTube, where they've uploaded a clip of what they just played. More RAM, plus the use of the GPU and a high-end CPU to assist in these functions, can make this experience much smoother. If the required power is lacking, users may not bother with such features (look at the current implementation in current games, where a lot of waiting is involved), so it can be argued that cutting-edge technology is required for user adoption of these features.

That said, it's a little too late to change anything, unfortunately, as both consoles are set to appear fairly soon, with general indications that both are sitting slightly below high-end PCs (with the PS4 outclassing the new Xbox in the graphical department).

When it comes to coaxing hardware makers to feature cutting-edge technology for gaming purposes, really, no better arguments can come from anyone other than Epic and Crytek, who will likely continue developing multiplatform engines for both. Just as Epic demonstrated to MS how the 360 would benefit from more RAM (with an actual demo), you would hope something similar occurred to demonstrate to Sony/MS the best cost/benefit hardware combination.

Benjamin Quintero
Yeah, I'm pretty sure demos like Samaritan and UE4's lighting model were a big wink and a nudge for both vendors to put on their big daddy pants, but it's also likely that the same will happen to those demos that happened to the original UE3 tech demos.

All the fuzzy-penumbra dynamic shadows with translucent colored surfaces were cast aside, and light maps came back strong, when they realized you could still barely get 30fps even with baked lighting. But yes, thank goodness they didn't ship with 256MB. What a disaster that would have been for them; like the N64, with all its power for its time, shackled by tiny ROM cartridges... =) The genie syndrome.

Justin Sawchuk
All they need to do is start pushing out a new console every 3 years rather than waiting 8; that was terrible.

Jonathan Jennings
If they start that, I might as well start upgrading my PC and save a buck.

Allaiyah Weyn
Yes, ditch the emphasis on graphics being the most important thing. I can say I've enjoyed low-res cartoony games from the 90s more than anything big-budget that came out after. And find some other way of unfolding a story other than cutscenes, unless you want to save that as a bonus movie DVD.

