(This post was originally posted on the Ananse Productions site.)
(This post assumes you're familiar with C, Objective-C and Cocoa.)
Just like TV shows and films, a video game's visual representation can be broken down into frames: single snapshots of the state of the game world at a given time. By showing these frames many times per second we give players the illusion of continuous motion.
The loop below is a high level description of what the game has to do each frame to draw the current snapshot.
//Basic game loop
while( gameIsRunning )
{
    ProcessInput();
    DoLogic();
    Render();
}
Even though the iOS API doesn't make this clear, there's a similar loop going on; we just have less control over it.
All iOS apps start with a call to
UIApplicationMain, which kicks off an
NSRunLoop, which for our purposes is a fancy event queue. Whenever something interesting but non-critical happens (input, for example) it gets put on the back of this queue and processed in the order it's received.
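To make the queue idea concrete, here's a toy FIFO in C. This is only a sketch of the concept, not how NSRunLoop is actually implemented, and every name in it is made up:

```c
#include <stddef.h>

/* A toy fixed-size FIFO event queue, illustrating the concept only. */
#define MAX_EVENTS 16

static const char *gQueue[MAX_EVENTS];
static int gHead = 0, gTail = 0;

/* Something interesting but non-critical happened: put it on the back. */
static void postEvent(const char *event)
{
    gQueue[gTail++ % MAX_EVENTS] = event;
}

/* Events come off the front, in the order they were received. */
static const char *nextEvent(void)
{
    if (gHead == gTail)
        return NULL; /* nothing pending */
    return gQueue[gHead++ % MAX_EVENTS];
}
```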
So how do we emulate our loop above using this? We know that with
UIApplicationDelegate we don't need
ProcessInput anymore. We can take care of input events as soon as they come in with the -touches* callbacks (touchesBegan:, touchesMoved:, touchesEnded: and touchesCancelled:). In general, input is going to require some type of platform-specific callback, so as long as we leave breathing room in our game loop for processing these we should be fine.*1
We know we want to calculate a new frame fairly frequently. Let's pull the number 60 times per second out of thin air for now.
//Bad iPhone game loop
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    [NSTimer scheduledTimerWithTimeInterval:(1.0/60.0) target:self
             selector:@selector(doFrame) userInfo:nil repeats:YES];
    return YES;
}
The above won't cut it. The resolution of an
NSTimer is 50-100 milliseconds.*2 1/60th of a second is about 16 milliseconds, so our resolution, at best, is three times the interval at which we want our function to be called. There are also a host of other problems with using repeating
NSTimers.*2 We could try messing with the
NSObject performSelector family of methods, but you're going to run into other problems, especially with our friend v-sync.
Monitors take a real (but very small) amount of time to draw the images on our screens. Furthermore, there's a gap between when we finish drawing one image and when we're ready to draw another.
The vertical refresh rate of a monitor is how many times per second it redraws the screen. This number varies across monitors and regions. Since we're only concerned with iOS in this post, we're going to focus on 60 frames per second, i.e. 16 milliseconds per frame (remember the number I pulled out of thin air), because that's the vertical refresh rate of iOS devices.*3
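A trivial helper makes the arithmetic explicit, and also shows why the NSTimer above was hopeless:

```c
/* Frame budget in milliseconds for a given refresh rate. */
static double frameBudgetMs(double refreshHz)
{
    return 1000.0 / refreshHz; /* 60 Hz -> ~16.7 ms per frame */
}
```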
Vertical synchronization (v-sync) refers to waiting for the screen to finish being drawn, i.e. waiting for the start of a "Rest" state.
With this information, let's take a closer look at our Render function:
InitDrawing() waits for v-sync, if it hasn't happened yet, before letting us draw any objects.*4,*5 FinishDrawing() signals that we don't have any more objects to draw this frame. In between them we put our special sauce that tightens the graphics on level 3.
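Here's a sketch of that Render function in C. DrawWorld is a hypothetical stand-in for the special sauce, and the counters exist only so the call order is visible; the real versions would talk to OpenGL ES:

```c
static int gStep = 0;
static int gInitStep, gDrawStep, gFinishStep;

/* Stand-ins for the real drawing calls. */
static void InitDrawing(void)   { gInitStep   = ++gStep; } /* waits for v-sync if needed */
static void DrawWorld(void)     { gDrawStep   = ++gStep; } /* draw this frame's objects */
static void FinishDrawing(void) { gFinishStep = ++gStep; } /* no more objects this frame */

static void Render(void)
{
    InitDrawing();
    DrawWorld();
    FinishDrawing();
}
```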
So what happens if we don't take v-sync into account when we draw? We'll be stuck inside of
InitDrawing() waiting for v-sync when we could be doing other useful things like dealing with touches. In fact, it was debugging the lagginess of VoiceOver (Apple's screen reader tech) in my game that made me realize I was stuck inside of InitDrawing().
Note that even if we were in a perfect environment and able to call doFrame every 16 msecs, if it's not synchronized with v-sync we'll still waste a lot of time waiting for it.
So how do we make sure doFrame gets called 60 times a second and we won't have to wait for v-sync? Fortunately,
CADisplayLink comes to our rescue.*8 It functions like the
NSTimer above, but it has good resolution and is timed with v-sync (sort of; we'll talk about that later).
//Better but not quite right yet.
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    mFrameLink = [CADisplayLink displayLinkWithTarget:self
                  selector:@selector(doFrame)];
    mFrameLink.frameInterval = 1;
    [mFrameLink addToRunLoop:[NSRunLoop mainRunLoop]
                forMode:NSDefaultRunLoopMode];
    return YES;
}
If we can ensure that every call to
doFrame will always take less than 16 msecs, then the above will work and you can go home and get back to coding. (And dear God, blog to the rest of us about how you're doing that.) However, if any of your frames take even a little bit longer, you're going to be in for a shocking surprise. Let's draw some pretty pictures.
Let's say we have a frame that takes 21 msecs (needed a little extra time to decode your awesome background music).
Frame 1: 0-----21
Since we took longer than our allotted 16 msecs,
CADisplayLink won't wait until the 32 msec mark to call doFrame. It will call it immediately.*9 It's easy to think "that's OK, I normally only take 10 msecs so this will be straightened out by the next frame". But remember our section on v-sync. By calling doFrame right away we're going to be stuck in
InitDrawing() until the 32 msec mark. So our timeline after the second frame will look like this:*13
Frame 1: 0-----21
Frame 2: 21----42
Our second frame has to wait until the 32 msec mark before it can start rendering, and then it will take 10 msecs to draw. And remember, you passed the 32 msec mark while drawing this second frame, which means
doFrame is going to be called immediately again.
Frame 1: 0-----21
Frame 2: 21----42
Frame 3: 42---58
The timelines above only take into account calling
doFrame. There's no breathing room for input processing, which will make your app unresponsive and laggy until the storm settles. We know this will settle itself out because the amount of time spent waiting for v-sync decreases with each frame, but the fact remains that one frame of suckiness can cause an avalanche of woe.
It's hard to track down if you don't know what you're looking for. Instruments will still tell you that you're rendering at 60 frames per second, but you'll see a lot of time spent in some combination of functions like usleep because you're stuck waiting for v-sync.
We could've put a quick end to this by enforcing that if we miss a v-sync we just suck it up and wait for the next one, i.e. don't get run over chasing the bus you missed.
Frame 1: 0-----21
Frame 2: 32--42
This gives us the breathing room we need to still process input and makes sure that one errant frame doesn't screw us over in the long run. Fortunately, this is eerily similar to problems physics engines have run into, and Glenn Fiedler has an excellent article that's the basis of how we're going to deal with this.*10
We're going to remember how long the past frame took and bank it.*11 Then the next time
doFrame is called we subtract 16 msecs from the bank. If the bank has no more time left in it, then we know we won't have to wait for v-sync. Otherwise, we do nothing because we know we're being triggered for a v-sync that we missed.
Also note the call to
fmod when we take longer than a frame. If we miss multiple v-syncs (while sitting at a breakpoint, for example) there's still only one call to doFrame.*9 Without the fmod, we could end up with a huge amount of time in our bank, and we wouldn't render until it's all subtracted out, one frame at a time.
//Pseudo-code for what I'm currently using.
- (void)doFrame
{
    static double bank = 0;
    double frameTime = mFrameLink.duration * mFrameLink.frameInterval;

    bank -= frameTime;
    if( bank > 0 )
        return; //we're being triggered for a v-sync we missed
    bank = 0;

    timer.Start();
    DoLogic();
    Render();

    double elapsed = timer.ElapsedTime()*PlatformHiResTimer::sNanoSecToSec;
    bank = elapsed;
    if( elapsed > frameTime )
        bank = frameTime + fmod( elapsed, frameTime );
}
This should take into account sporadic frames that run longer than usual. It isn't meant to protect against a stream of frames that consistently take longer than 16 msecs. If that's the case, you need to lower your frame rate.
It took a fair amount of experimenting to figure this out. While this discussion focuses on iOS, it's meant to serve as a baseline for setting up game loops on any platform.
Note 1: If you're really set on delaying input handling, you can use the -touches* functions to queue events until you call
ProcessInput. Be careful of lag, especially if your game is supposed to work with VoiceOver (Apple's screen reader tech). The VoiceOver cursor relies on touch events being processed off of the run loop even if the events aren't passed down to the app.
Note 2: NSTimer accuracy issues are discussed here.
Note 3: Our final solution actually isn't dependent on knowing vertical refresh rate. It's just easier to discuss with a firm number in mind.
Note 4: EAGLContext's presentRenderbuffer: returns before v-sync happens. It's the next render command that will be forced to wait for v-sync if it's called too early. A lot of the Google results for
glClear being slow on the iPhone are probably related to not waiting for v-sync.
Note 5: A lot of PC/Mac games have options to not wait for v-sync. This lets you render faster but can cause visual artifacts. On iOS we're stuck with v-sync. For a full discussion see this wiki article.
Note 6: Clearing the screen:
[EAGLContext setCurrentContext: sGLContext];
glClearColor( 0, 0, 0, 1 );
glClear( GL_COLOR_BUFFER_BIT );
//You can also draw a screen filling black quad. This hasn't been
//a bottleneck for me.
Note 7: It seems like iOS always uses double buffering.
Note 8: CADisplayLink is only available in iOS 3.1 and later. Reference doc.
Note 9: CADisplayLink's behavior is actually more complicated. If your frame misses multiple v-sync events, the selector seems to be called only once for all of the missed events. I also believe that the
CADisplayLink object's duration property is for the most recent v-sync event. Furthermore, the duration property can vary significantly from the time between calls to the selector even when we don't miss a frame. Performing the selector seems to be put off if the
NSRunLoop is busy when v-sync happens. The moral of the story: while
CADisplayLink gives us much better resolution than an
NSTimer, calls to
doFrame aren't guaranteed to line up with v-sync.
Note 10: Glenn Fiedler's article "Fix Your Timestep!" can be found here.
Note 11: Glenn would use the term accumulator but I think "bank" is more intuitive.
Note 12: PlatformHiResTimer is a helper class that wraps platform-specific functions that give us nanosecond timing resolution. For the iPhone/Mac I use
mach_absolute_time and mach_timebase_info, which are mentioned in this Mac OS X Q&A.
Note 13: For the timelines we're assuming
DoLogic() doesn't take a noticeable amount of time. It makes drawing the timelines easier. Note that without this assumption, you could get lucky and have the work in
DoLogic() push the call to
Render() past v-sync. But you're playing with fire as it's going to be very hard to make sure that happens on a consistent basis.
We could also call
Render() before DoLogic() to make sure we start rendering as soon as v-sync is done. However, we'd be rendering a frame behind the game. This can cause wonkiness, especially if we've processed input that's changed the state of the game but we haven't called
DoLogic() to validate/fix it.
It makes more sense to me to always call
DoLogic() first. When
doFrame() takes less than 16 msecs it doesn't make a difference. When
doFrame() takes longer than 16 msecs, I think the final version of
doFrame() handles it better than the trickeration needed to call
Render() first. That said, I've seen a fair number of game programming books advocate calling
Render() first, so I might be missing out on something.