The following blog post, unless otherwise noted, was written by a member of Gamasutra's community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.
A Gear VR game needs to run:
1) at 1440p
2) in stereoscopic 3D
3) at a consistent 60 frames-per-second
4) on a cell phone.
, even for a graphically simple game like Darknet
. When I first heard about Gear VR, Darknet was only barely pulling 60fps on a PC, and I was downright scared of the optimization challenge. Still, Oculus had shown me demos that proved the potential of mobile VR, and I knew that Darknet hypothetically ought to be able to run fast enough, considering its abstract visuals. So, I got to work.
Now, months later, Darknet is indeed hitting 60fps on the phone. I haven’t declared victory yet, since there’s no such thing as “too much optimization” on mobile; with more effort, I could still reduce battery drain, or stave off overheating, or get rid of the remaining performance hiccups. But the game is fully playable, and I thought it might be nice to share some details about how I sped things up.
Just about everything in Darknet glows. In the original PC build, this was accomplished through a post-process glow effect. Basically, the glowing objects would be rendered a second time, blurred, and then overlaid on top of the scene. This was easy to implement and looked good, but performance-wise, it was expensive. So expensive, in fact, that it was completely out of the question for the mobile version.
With 2D sprites, the solution to this problem is simple: you just add the “glow” to the sprite before putting it in the game. Unfortunately, that’s almost never an option for 3D game objects. You could add a glow to the object’s texture, but the glow won’t extend beyond the object’s surface, which looks clearly wrong. Because the object’s silhouette constantly changes based on the camera angle, there’s no way to complete the effect, unless you can somehow implement a fake glow sprite for every possible outline of the object.
Luckily, most objects in Darknet are roughly spherical, so the outline is always a circle! That means that I can simply draw a glowing 2D circle on top of the object to complete the fake glow effect. (Because the visible radius of a sphere changes based on distance from the camera, it turns out that I needed to do some fancy math to adjust the circle's scale, but it wasn't too much trouble.)
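The post doesn't spell out that math, but here's one plausible version of the scale correction, as a sketch (the function name and parameters are mine, not from Darknet's code). The key subtlety is that a sphere's silhouette lies where sight lines are *tangent* to it, so the silhouette's angular radius is asin(r/d), slightly larger than the naive atan(r/d):

```python
import math

def outline_scale(sphere_radius, distance):
    """Radius of a flat billboard circle, centered on the sphere, that
    exactly covers the sphere's silhouette as seen from the camera.

    The silhouette is where sight lines are tangent to the sphere, so its
    angular radius is asin(r/d). Matching a flat disc at distance d to
    that angle gives s = r * d / sqrt(d^2 - r^2).
    """
    r, d = sphere_radius, distance
    if d <= r:
        raise ValueError("camera is inside the sphere")
    return r * d / math.sqrt(d * d - r * r)
```

Far from the camera the correction factor approaches 1, but up close the disc has to grow noticeably larger than the sphere's true radius, which is why a fixed-size sprite would visibly pop.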
There was also a nice side effect of using a custom 2D sprite for the object’s outline: I now have perfectly smooth outlines to my circles, which would otherwise be impossible without a lot of expensive anti-aliasing. On top of that, the “fake” glow effect also looked a lot cleaner than the “real” glow effect I had before. So, in this case, making the game faster also made it a lot prettier! Here’s an overall before/after screenshot, comparing the build from early March to the current version:
The most important 3D objects in Darknet are the “nodes” and the “links” that connect them. The nodes are all different kinds of Goldberg polyhedra, while the links are dynamically generated curved meshes.
Originally, the links were almost perfectly smooth, but this required a lot of polygons. For the mobile version, I couldn’t afford that luxury, so I adjusted the links to be a little more geometrically simple. Since they were dynamically generated from the start, I could accomplish this by just changing a single variable before the links were built. The new links are a little more jagged, but not distractingly so.
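As a sketch of that idea (the real code is Unity C#, and the curve type and names here are my own assumptions), a dynamically built link might sample a curve between two nodes, with a single segments variable acting as the quality knob:

```python
def link_points(p0, p1, control, segments):
    """Sample a quadratic Bezier curve from p0 to p1 through a control point.

    'segments' is the one quality knob: a high value gives smooth links
    for the PC build, a low value gives cheaper, slightly jagged links
    for mobile. The mesh is then extruded along these points.
    """
    points = []
    for i in range(segments + 1):
        t = i / segments
        u = 1.0 - t
        points.append(tuple(u * u * a + 2.0 * u * t * c + t * t * b
                            for a, c, b in zip(p0, control, p1)))
    return points
```

Because the polygon count is derived from one parameter at build time, switching between the PC and mobile quality levels is a one-line change.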
The nodes were more troublesome. You’d think that, as abstract geometric shapes, they’d be pretty simple, but the more complex polyhedra actually required a lot of vertices and triangles. For a long while, I thought I could get away with keeping them, but I eventually admitted that they were just too expensive, and I started looking for an alternative.
I decided to try out simple spherical meshes with a tiled hexagon texture, which would reduce the game’s polygon count by about half. This was definitely not an ideal solution. I knew that this would look at least somewhat wrong, since it’s impossible to actually form a sphere entirely out of hexagons. The hexes at the top and bottom of the sphere would inevitably be stretched and ugly, but I hoped to minimize this issue by making sure that the spheres were directly pointed at the player most of the time.
To my surprise, the hex spheres worked out fantastically. The nodes don’t look perfect, but the flaws are subtle. Plus, because I was drawing the smooth outlines separately, the edges of the spheres (which were now much more jagged) weren’t visible at all. Best of all, it’s trivial for me to switch back to the smoother and more “correct” nodes and links for the PC version of the game. You can see the differences between the mobile and PC versions if you look closely (here's a bigger version), but I don’t think they’re problematic:
Fewer Draw Calls
It was made very clear to me that one of the most important optimizations for a mobile Unity game was reducing the number of draw calls. This meant either drawing less stuff (which I really didn’t want to do) or else making good use of draw call batching, in which objects are grouped together and rendered all at once.
Batching is conceptually simple, but was surprisingly hard to get working. Unity will automatically try to batch objects into as few draw calls as possible, but it has a list of limitations that derail this process. I ran into most of them. (Warning: this part gets a little more technical.)
- Unity can’t batch objects that use different render materials. In the original prototype of Darknet, I wrote code that adjusted the “renderer.material” property of all my objects, which inadvertently created a unique copy of the material for every object. As a result, the prototype used a new draw call for every one of the thousands of objects in the scene. Using “renderer.sharedMaterial” (or else adjusting vertex colors instead of material colors) solved this problem.
- I was also able to reduce the number of unique materials in the scene by using texture atlasing. To make this easier, I just used 2D Toolkit to automatically create atlases for all my 2D objects, like menus and outline sprites. Since they were all technically using the same texture, they could all be batched together.
- Unity also can’t batch meshes with too many vertices (300 or more, in most cases), so I went through all the meshes in Darknet and tried to make sure that they were batching-friendly. Aside from reducing polygon count, this was one of the big motivators pushing me to use simpler meshes for the node objects.
- For some reason, Unity can’t batch objects that have different uniform scales. That means that a group of objects with the same uniform scale like (1,1,1) can be batched, and objects with non-uniform scales like (1,2,3) and (3,1,1) can be batched, but groups of objects with differing uniform scales like (2,2,2) and (3,3,3) can’t be batched together. I was able to “solve” this by imperceptibly fudging the scale of the uniformly scaled objects so that they were non-uniform, e.g. by adding (0, 0, Random.value * 0.05f) to the scale. This still seems like an incredibly silly thing to do (why would adding complexity make the game faster?), but it made the draw call count plummet. If you have a suggestion for a better way, I’d love to hear it.
- For transparent objects that share the same render queue value, Unity will sort them by depth and then draw them in order. This can sometimes be a problem. Imagine that you have a hundred objects with Material A very far away, and a hundred objects with Material B very nearby; Unity would first draw the A group with one draw call, then the B group with a second call. But if the objects were randomly distributed in space instead of being all-far and all-near, the depth sort would interleave the materials: Unity would first draw some from A, then some from B, then some from A, and so on. This would require many, many more draw calls, since the different materials can’t be batched. It took me a long time to realize that this was the problem, but I was usually able to solve it by giving different render queue values to the different materials or by putting the materials together into a texture atlas.
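The renderer.material trap in the first bullet is easy to model. Here's a toy Python sketch of the behavior (Unity itself is C#, and these class names are mine, not Unity API): accessing .material silently clones the shared material into a per-object instance, and every unique material prevents batching.

```python
class Material:
    def __init__(self, name):
        self.name = name

class ToyRenderer:
    """Toy model of Unity's material instancing; not real Unity API."""
    def __init__(self, shared):
        self.sharedMaterial = shared
        self._instance = None

    @property
    def material(self):
        # Like Unity's renderer.material: the first access silently clones
        # the shared material into a unique per-object copy.
        if self._instance is None:
            self._instance = Material(self.sharedMaterial.name)
        return self._instance

def material_groups(renderers):
    """Count distinct effective materials -- a lower bound on draw calls,
    since only objects sharing a material can batch."""
    return len({id(r._instance or r.sharedMaterial) for r in renderers})
```

One material shared by a thousand objects can collapse into a handful of draw calls; touch .material on each of them and you're back to a thousand.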
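The uniform-scale workaround amounts to something like this (sketched in Python for clarity; the actual code sets the object's scale in Unity C#, and the function name is mine):

```python
import random

def fudge_uniform_scale(scale, max_offset=0.05):
    """Make a uniform scale imperceptibly non-uniform so Unity's dynamic
    batching will accept it alongside other non-uniformly scaled objects.

    Already non-uniform scales pass through untouched.
    """
    x, y, z = scale
    if x == y == z:
        z += random.random() * max_offset + 1e-4  # tiny, and never exactly zero
    return (x, y, z)
```

The offset is far below anything a player could notice, but it's enough to keep groups of differently scaled objects out of the "differing uniform scales" case that blocks batching.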
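The depth-sorting problem in the last bullet can also be simulated: after the back-to-front sort, only consecutive objects with the same material can batch, so the draw call count is roughly the number of material "runs." A rough Python model (my own simplification, not Unity API):

```python
def transparent_draw_calls(objects):
    """Estimate draw calls for transparent objects in one render queue.

    Each object is a (depth, material) pair. Sort back-to-front, then
    count runs of consecutive identical materials, since only consecutive
    same-material objects can be batched into one call.
    """
    ordered = sorted(objects, key=lambda o: -o[0])  # farthest drawn first
    calls = 0
    prev = None
    for _, material in ordered:
        if material != prev:
            calls += 1
            prev = material
    return calls
```

With all the A objects far away and all the B objects nearby, this yields two draw calls; interleave the same two hundred objects in depth and the count explodes, which matches the behavior described above.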
I plan to continue optimizing Darknet on mobile, even after it’s released. I’m sure there are plenty of other things I could be doing to get the game running faster and more efficiently. But even though it’s far, far from perfect, I’m happy with how it’s shaping up. I don’t think of myself as an especially talented or experienced programmer, but when I compare Darknet’s current code to that of the old prototype from last year, I feel pretty proud of how far it’s come!