Over the last few years, I must admit, many of us have become complacent about modern graphics. Ask most games journalists and general enthusiasts and they will tell you that the graphics are fine; "aesthetic" has become the new buzzword.
Many people don't bother to read the specs for video cards anymore, or dig through the internet for obscure facts like how many gigaflops a card can crunch or how many raw triangles per second it can transform.
The fact is that if you've purchased a decent gaming PC at any point as far back as 2007, you probably have a machine that will play most console ports. Another trend that has slowed technical progress is the strong presence of 2D games on mobile devices finding their way to PC as well.
It's no secret that creating a product for the lowest common denominator is a surefire way to be as mainstream as possible. What better way to play on grandma's 1990 Tandy PC than to hold back your technical ambitions to the barest of drawing techniques?
It's not all bad to push technology to the side and hand the reins to the artists, but there is an argument for pulling back on those reins and shifting control back toward the engineering marvels we saw in the early days of hardware acceleration. My justification comes from the fears many developers have about the next generation.
Some developers claim that budgets will increase anywhere from 50% to 100%, meaning we could see average game budgets in the $100M+ range. Much of this cost comes down to the sheer amount of content and the level of detail required for that content. It would take a small country to produce the content we need under the technical constraints of most real-time systems.
But what if those technical constraints were alleviated, at least to a degree? Instead of the NextBox or the PS4 being a mainstream version 1.5 of their predecessors, what if these devices were absolute powerhouses?
My reasoning is simply this: no one is crazy enough to spend $150M or more on a game that risks failing completely. But if games could be made at the same scale or less, and more effort could be put into better lighting, shadowing, real-time effects, and truly intelligent AI, the line blurs. Games can creep closer to film-quality visuals, and it doesn't have to mean more triangles.
It is shocking what a couple of post-process effects can do for an image, or enabling 16x+ anti-aliasing, or a completely dynamic lighting system in place of the de facto standard of costly and time-consuming baked environments. The idea that we need to keep throwing more triangles at a problem is an old trope we've trained ourselves to believe.
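To make the point concrete, here is a minimal sketch of the kind of full-screen post-process pass I'm talking about: a cheap vignette plus gamma adjustment applied to a framebuffer, changing the whole feel of an image without a single extra triangle. This is purely illustrative (Python with NumPy, operating on a CPU-side pixel array; the function name and parameters are my own, not from any engine):

```python
import numpy as np

def post_process(frame, gamma=2.2, vignette_strength=0.4):
    """Cheap full-screen pass: vignette + gamma encode.

    frame: float array of shape (H, W, 3), linear color in [0, 1].
    """
    h, w, _ = frame.shape
    # Normalized distance of each pixel from the screen center.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.sqrt(((ys - cy) / cy) ** 2 + ((xs - cx) / cx) ** 2) / np.sqrt(2)
    # Darken toward the edges (vignette), then gamma-encode for display.
    vignette = 1.0 - vignette_strength * dist ** 2
    out = frame * vignette[..., None]
    return np.clip(out, 0.0, 1.0) ** (1.0 / gamma)

# A flat gray frame comes out brighter near the center than at the corners.
frame = np.full((4, 4, 3), 0.5, dtype=np.float32)
result = post_process(frame)
```

In a real engine this would be a fragment shader running over a screen-aligned quad, but the principle is the same: one pass over the pixels you already have, at a cost that is independent of scene complexity.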
More content is not the way to cut budgets. I feel that smarter use of that power to procedurally improve the look of the game will result in something that feels next-generation without costing like a next-generation game.
Personally, in the next generation I would like to see what processing power can do to improve the assets being created today. Adding more GPU cores would, I feel, be a bigger win than adding a second screen or bundling a Kinect 2.0 with every console. Raw, unadulterated processing power is just the thing to break the chains of these budget concerns, and maybe spark a little movement in the PC space as well.
It sounds ironic, because we assume most developers will instantly consume this power with 100k-triangle eyelashes for their RTS units, but that is exactly what needs to be avoided if you want to stay in business. The big win in the next generation will not be screens or motion controls; it will be each developer's ability to find ways to look that much closer to a Pixar-quality film, and do it in 33ms or less.
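The 33ms figure is just the per-frame time budget at 30 frames per second; every effect, shadow, and AI tick has to fit inside it. A quick sanity check on the arithmetic:

```python
def frame_budget_ms(fps):
    """Milliseconds available to render one frame at the given frame rate."""
    return 1000.0 / fps

budget_30 = frame_budget_ms(30)  # roughly 33.3 ms, the figure above
budget_60 = frame_budget_ms(60)  # roughly 16.7 ms; 60 fps halves the budget
```

This is why raw power matters more than bullet points on the box: a faster GPU doesn't change the budget, but it changes how much you can spend within it.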
The only way this happens on a reasonable budget is not through pure aesthetics, and definitely not through more content creation, but through a strong technical pipeline, and that pipeline needs raw power behind it.
...Fingers crossed that Sony and Microsoft do the right thing.