Gamasutra: The Art & Business of Making Games
Opinion: Rendering Technology: What's Next?

March 31, 2011 | By Szymon Swistun




[November Software co-founder and ex-Force Unleashed 2 lead engineer Szymon Swistun (@sswistun) talks about his concerns with increasingly complex workflows, in this #altdevblogaday-reprinted opinion piece.]

Typically, the more cutting edge rendering features we add to our games, the more complex the workflow for the engineers, artists and content creators that deal with them.

There are many brilliant technology advancements in development or near release: real-time ray-tracing, real-time global illumination, major DirectX API changes, CPU/GPU fusion, mega-meshes, just to name a few.

However, our industry is already facing a tremendous problem: the exponentially increasing cost of game development. This will only get worse, very fast, unless we drastically simplify workflows for every discipline involved in game development.

This is my first blog post on AltDev, and my first blog post ever, actually. I don't have a personal blog, and I only joined Facebook and Twitter at the start of this new year, so I am a bit out of touch. I had my head down deep in the trenches shipping games for the past six years, and just recently left LucasArts.

I saw [#AltDevBlogADay founder and Insomniac engine director] Mike Acton's line on his LinkedIn, "None of us could have done our jobs well without relying on the work of others and those giants that came before us, and we all have a duty to return that favor." This compelled me to start sharing my insights and what I have learned over the years.

Early Generation Rendering Technology

I was in school at the time and looked up to Carmack and Abrash. I am still amazed at how the tech they wrote for Quake is so relevant and directly influences new features today. They had per-pixel occlusion culling (no overdraw) working for all their static geometry rendering in 1996 on the CPU!
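The core promise of that approach, drawing every visible pixel exactly once, can be illustrated with a toy sketch. This is not Quake's actual algorithm (which maintained sorted span lists per scanline); it is a simplified per-pixel version of the same front-to-back idea, with made-up names:

```python
def rasterize_scanline(width, surfaces):
    """Fill one scanline with zero overdraw.

    surfaces: list of (x_start, x_end, surface_id) tuples,
    sorted nearest-to-farthest. Because nearer surfaces claim
    pixels first, each pixel is written at most once.
    """
    pixels = [None] * width              # None = not yet covered
    for x0, x1, sid in surfaces:
        for x in range(max(x0, 0), min(x1, width)):
            if pixels[x] is None:        # only fill uncovered pixels
                pixels[x] = sid
    return pixels
```

A nearer surface in the middle of the scanline punches a hole in a farther one behind it; no pixel is ever shaded twice, which is the whole point of eliminating overdraw.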

I recommend reading Abrash's Ramblings in Realtime for anyone with a passion for squeezing everything out of nearly nothing. I won't comment any more on this part of the timeline, since I did not ship any games then. I encourage anyone who did to fill us in with your comments below.

Current Generation Rendering Technology

Content is King! That was the dominant message when I worked on my first game, which happened to be an Xbox 360 launch title, Madden 06. The graphics paradigm had just made a huge shift away from a fixed function pipeline to a programmable one via shaders. (Yes, Carmack developed them way beforehand, but these new ones are cooler. ;) )

Polygon counts on geometry drastically increased, along with texture sizes, and improved post-processing effects were added: haze, bloom, HDR, tone-mapping, color LUTs, etc. More recently, studios have developed very complex lighting models, deferred lighting to support 100+ lights in the view, better dynamic soft shadows, dynamic ambient occlusion, per-pixel motion blur, screen space anti-aliasing methods, and much more.
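What makes 100+ lights feasible in a deferred renderer is that lighting cost becomes per light per covered pixel, independent of scene polygon count, because geometry is rasterized into a G-buffer exactly once. A minimal CPU-side sketch of that second pass, with a simplified diffuse model and invented data layout purely for illustration:

```python
import math

def shade_deferred(gbuffer, lights):
    """Lighting pass of a deferred renderer (toy version).

    gbuffer: per-pixel dicts with 'pos', 'normal', 'albedo' (3-tuples),
    already filled by a separate geometry pass. Each light is
    accumulated per pixel with simple N.L diffuse and 1/d^2 falloff.
    """
    out = []
    for px in gbuffer:
        intensity = 0.0
        for light in lights:                          # no geometry here
            lx = [light['pos'][i] - px['pos'][i] for i in range(3)]
            dist = math.sqrt(sum(c * c for c in lx)) or 1.0
            ndotl = max(0.0, sum(px['normal'][i] * lx[i] / dist
                                 for i in range(3)))
            intensity += ndotl * light['intensity'] / (dist * dist)
        out.append(tuple(c * intensity for c in px['albedo']))
    return out
```

Adding a light only adds one term to the inner loop; none of the scene's triangles are touched again, which is the trade that deferred lighting makes.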

With the PS3 and its SPUs, we found ways to push the hardware past the boundaries of that generation's GPUs at times, proving, to me at least, the incredible potential of combining CPU and GPU architectures in the future.

On top of shaders, new techniques for authoring them have grown in popularity, such as shader node graph editors, which give artists even more control over surface shading (I will leave my rants on this for another post). There are some intriguing techniques in development as well: mega-textures and voxel ray-tracing geometry from id, and mega-meshes from Lionhead.

These in particular are brilliant methods that abstract data complexity away from the limitations of the hardware they are intended to execute on. However, unless they are paired with an additional layer that simplifies the workflow of generating all the extra content they make possible, the artists and content teams will be expected to do even more work than before.

I am really interested in the process and tools that help simplify the work for the artists and content teams when using these new techniques. My guess is that the art team does not have the time to paint in all of the detail of an entire mega-texture by hand without some procedural generation assistance. It would be great to hear more on this from those artists.
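To make "procedural generation assistance" concrete: the usual building block is deterministic, seedable noise, which can fill in broad surface detail automatically so artists only hand-paint the hero areas. A minimal value-noise sketch (the hashing constants and function names here are illustrative, not from any particular engine):

```python
import random

def value_noise(x, y, seed=0):
    """Smoothly interpolated pseudo-random value in [0, 1) at (x, y).

    Deterministic for a given seed, so the same texture detail is
    regenerated identically on every machine -- only the seed and
    artist overrides need to be stored, not the full texture.
    """
    def lattice(ix, iy):
        # Hash the integer lattice point into a repeatable random value.
        rng = random.Random((ix * 73856093) ^ (iy * 19349663) ^ seed)
        return rng.random()

    ix, iy = int(x), int(y)
    fx, fy = x - ix, y - iy
    # Smoothstep the fractional parts for C1-continuous interpolation.
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = lattice(ix, iy) * (1 - fx) + lattice(ix + 1, iy) * fx
    bot = lattice(ix, iy + 1) * (1 - fx) + lattice(ix + 1, iy + 1) * fx
    return top * (1 - fy) + bot * fy
```

Layering several octaves of this at different frequencies is the standard way to rough in terrain or surface texture before an artist touches it.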

Take Away

There is a tremendous amount of increased complexity in the content that makes high quality games look as good as they do and still run in real-time. As a direct result, team sizes are skyrocketing from 10-40 people to 70-250+, even without counting outsourcing.

What's Next?

The most obvious trends point to increased use of tessellation via displacement mapping and subdivision surfaces, giving developers more direct access to the raw hardware, and much more complexity and detail in shaders, lighting, textures and materials.
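The subdivide-then-displace idea behind hardware tessellation can be shown in one dimension: a coarse edge is refined, and each new vertex is pushed along a height function standing in for a displacement-map sample. A toy sketch, with made-up names:

```python
def tessellate_edge(v0, v1, levels, height):
    """Subdivide the 2D segment v0..v1 'levels' times.

    Each inserted midpoint is displaced vertically by height(x),
    a stand-in for sampling a displacement texture. Only the coarse
    endpoints and the height function need to be authored/stored;
    the dense geometry is generated on demand.
    """
    points = [v0, v1]
    for _ in range(levels):
        refined = [points[0]]
        for a, b in zip(points, points[1:]):
            mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
            refined.append((mx, my + height(mx)))   # displace midpoint
            refined.append(b)
        points = refined
    return points
```

This is exactly the content-size win the GPU tessellation pipeline offers: the stored asset stays coarse, and detail is synthesized per frame at whatever level the hardware can afford.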

My fear is that if we don't drastically change our workflows to be exponentially simpler than they are now, we may find ourselves out of our cool jobs. Instead of innovating new rendering techniques and pushing the hardware, we might end up optimizing Flash performance while working for a giant social gaming company sooner than we think.

Did you know that some are successfully releasing new features every week? Please tell me I'm crazy and that this fear is nonsense. It would help me sleep better at night.

My goal here is simply to raise awareness of this issue, which I am also currently addressing. I will go into more detail on solutions to these problems that I have experienced in my next posts. It would be invaluable to get the best minds trying to help simplify workflows for all disciplines in game development. Workflows might not get the rockstar status that new rendering tech does, but they're work that needs to get done. It's time to examine and invest in the work others rely on.

I would love to see some comments from the artists and content teams working directly with some of the more recent advancements that have the potential of shifting the tide of development costs: mega-textures, voxel ray-tracing, mega-meshes, dynamic GI tools, procedural content generation.

[This piece was reprinted from #AltDevBlogADay, a shared blog initiative started by @mike_acton devoted to giving game developers of all disciplines a place to motivate each other to write regularly about their personal game development passions.]

