The technique was first mentioned by Crytek among other improvements (like screenspace raytraced shadows) in their DirectX 11 update for Crysis 2, and then came up in a couple of their presentations, articles and talks. In my free time I implemented a prototype of this technique in CD Projekt's Red Engine (without any filtering or reprojection, doing a total brute force) and the results were quite "interesting", but definitely not usable. At that time I was also working hard on the Xbox 360 version of The Witcher 2, so there was no way I could improve it or ship it in the game I worked on, and I just forgot about it for a while.
At Sony DevCon 2013, Michal Valient's presentation about Killzone: Shadow Fall described using screenspace reflections together with localized and global cubemaps as a way to achieve a general-purpose, robust solution for indirect specular and reflectivity, and the results (at least in the screenshots) were quite amazing.
Since then, more and more games have used it and I was lucky to be working on one: Assassin's Creed 4: Black Flag. I won't dig deeply into the details of our exact implementation here - to learn them, come and see my talk at GDC 2014 or wait for the slides!
In the meantime I will share some of my experiences with this technique, along with the benefits, limitations and conclusions from numerous discussions with friends at my company. Given the increasing popularity of the technique, I find it really odd that nobody seems to share their findings about it...
Advantages of screenspace raymarched reflections are quite obvious and they are the reason why so many game developers got interested in it:
(*) When I say it is trivial to implement, I mean that you can get a working prototype in a day or so if you know your engine well. However, to get it right and fix all the issues you will spend weeks, go through many iterations, and there will surely be a steady stream of bug reports over time.
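The "working prototype in a day" claim is believable because the core of the technique is just a brute-force march against the depth buffer: reflect the view ray, step along it, and at each step compare the ray's depth with the stored scene depth. The sketch below is a deliberately simplified, illustrative version (a 1D "depth buffer" indexed by screen column, and all names are mine, not any engine's API):

```python
def raymarch_reflection(depth, x0, z0, dx, dz, max_steps=64, thickness=0.5):
    """Brute-force screenspace raymarch against a 1D 'depth buffer'
    (depth[x] = scene depth at screen column x, larger = farther).
    Returns the hit column, or None when the ray leaves the screen or
    exhausts its step budget -- the 'missing information' failure case."""
    x, z = float(x0), float(z0)
    for _ in range(max_steps):
        x += dx
        z += dz
        xi = int(round(x))
        if xi < 0 or xi >= len(depth):
            return None  # ray left the screen: no information available
        scene_z = depth[xi]
        # Hit when the ray ends up behind the stored surface, within a
        # thickness tolerance so rays don't march through thin objects.
        if scene_z <= z <= scene_z + thickness:
            return xi
    return None  # no intersection found within the step budget
```

A real implementation marches in 3D view space and projects every sample back to screen coordinates, but the structure (and the failure modes) are exactly the ones shown here.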
We have seen all those benefits in our game. In these two screenshots you can see how screenspace reflections easily enhanced the look of the scene, making objects feel more grounded and attached to the environment.
AC4 Screenspace reflections off
AC4 Screenspace reflections on
One thing worth noting is that in this level - Abstergo Industries - the walls had complex animations and emissive shaders on them, and it was all perfectly visible in the reflections - no static cubemap could have allowed us to achieve that futuristic effect.
Ok, so this is a perfect technique, right? Nope. The final look in our game is the result of long, hard work tweaking the effect, optimizing it heavily and fighting various artifacts. It was heavily scene dependent and sometimes it failed completely. Let's have a look at what causes those problems.
Well, this one is obvious. With all screenspace-based techniques you will miss some information. With screenspace reflections, problems are caused by three types of missing information:
Ok, it's not perfect, but that was to be expected - all screenspace-based techniques that reconstruct 3D information from the depth buffer have to fail sometimes. But is it really that bad? The industry accepted SSAO and its limitations (although I think that right now we should already be transitioning to 3D techniques like the one developed for The Last of Us by Michal Iwanicki), so what can be worse about SSRR? Most objects are non-metals with a strong Fresnel effect, so when the reflections are significant and visible, the required information should be somewhere nearby on screen, right?
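The Fresnel effect mentioned above is what makes dielectrics nearly mirror-like at grazing angles while staying barely reflective head-on. In real-time rendering it is usually modeled with Schlick's approximation; a minimal version (the 0.04 normal-incidence reflectance is the common rule of thumb for dielectrics, not a value from this post):

```python
def fresnel_schlick(cos_theta, f0):
    """Schlick's approximation of Fresnel reflectance.
    cos_theta: dot(surface normal, view direction), in [0, 1];
    f0: reflectance at normal incidence (~0.04 for common dielectrics,
        much higher for metals)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```

At normal incidence this returns f0 (weak reflection), and at grazing angles it approaches 1.0 - which is exactly why grazing-angle reflections matter so much, and why their rays tend to travel far across the screen into regions where information may be missing.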
If the problems caused by the lack of screenspace information were "stationary", it wouldn't be that bad. Unfortunately, the main issues with it are really ugly.
Weird temporal artifacts from characters.
I've seen them in videos from Killzone, during gameplay of Battlefield 4, and obviously I had tons of bug reports on AC4. Ok, so where do they come from?
They all come from a lack of screenspace information that changes between frames or varies a lot between adjacent pixels. When objects or the camera move, the information available on screen changes. So you will see various noisy artifacts from the variance in normal maps, ghosting of reflections from moving characters, whole reflections (or parts of them) suddenly appearing and disappearing, and aliasing of objects.
All of it gets even worse when we take into account the fact that all developers seem to be running this effect at partial screen resolution (e.g. half res). Suddenly even more aliasing is present, more information is incoherent between frames, and we see more intense flickering.
Flickering from geometric depth / normals complexity
Obviously programmers are not helpless - we use various temporal reprojection and temporal supersampling techniques (I will definitely write a separate post about them, as we managed to use them for AA and SSAO temporal supersampling), bilateral methods, conservative tests / pre-blurring of the source image, a screenspace blur on the final reflection surface to simulate glossy reflections, hierarchical upsampling, hole filling using flood-fill algorithms and, finally, blending the results with cubemaps.
It all helps a lot and makes the technique shippable - but the problem is, and always will be, present, simply due to the limited screenspace information.
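As a rough illustration of the last mitigation on that list - blending with cubemaps - the fallback can be weighted by how much the raymarch result can be trusted, plus a fade near the screen borders where rays inevitably run out of data. Everything below (the names, the linear fade, the scalar confidence) is my own sketch of the general idea, not the AC4 implementation:

```python
def border_fade(u, v, margin=0.1):
    """1.0 in the screen interior, falling linearly to 0.0 within
    `margin` of any edge (u, v are screen coordinates in [0, 1])."""
    def fade(t):
        return max(0.0, min(1.0, min(t, 1.0 - t) / margin))
    return fade(u) * fade(v)

def composite_reflection(ssr_color, ssr_confidence, cubemap_color, edge_fade):
    """Blend the screenspace reflection with a cubemap fallback.
    ssr_confidence: 0..1 validity of the raymarch hit (0 = total miss);
    edge_fade: output of border_fade for the hit's screen position."""
    w = ssr_confidence * edge_fade
    return tuple(w * s + (1.0 - w) * c
                 for s, c in zip(ssr_color, cubemap_color))
```

Because the weight goes smoothly to zero both on raymarch misses and near the screen edges, the cubemap takes over gradually instead of the reflection popping in and out - which is exactly the kind of temporal artifact described above.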
Ok, so given those limitations and ugly artifacts, is this technique worthless? Is it just a 2013/2014 trend that will disappear in a couple of years?
I have no idea. I think it can be very useful, and I will definitely vote for using it in the next projects I work on. It should never be the only source of reflections (for example, without any localized / parallax-corrected cubemaps), but as an additional technique it is still very interesting. Just a couple of guidelines on how to get the best out of it:
Also, some of the research going on right now on SSAO / screenspace GI etc. can be applicable here, and I would love to hear more feedback in the future about:
As probably all of you noticed, I deliberately didn't mention the console performance or the exact implementation details on AC4 - for those you should really wait for my GDC 2014 talk. :)
Anyway, I'm really interested in other developers' findings (especially from those who have already shipped their games with similar techniques), and I can't wait for a bigger discussion about the problem of handling the indirect specular BRDF part, which is often neglected in academic real-time GI research.