[Would you like to integrate YouTube video upload into your games? In a detailed technical feature with sample code, Team Bondi programmer Claus Höfele delves into the practical steps for your users to get gameplay footage automagically uploaded online.]
Increasingly, YouTube integration is seen as a valuable
feature addition to games. Spore Creature Creator, for example, offers to
place a video of your creature on YouTube.
At the same time, a special YouTube
channel aggregates the videos created by the community and offers additional
informative material about Spore. This is a great way to encourage user-created
content and build a community around a game.
Other current games with YouTube support include PixelJunk
Eden and Mainichi Issho -- both allow you to upload recorded video footage of
your game performance. And it's not only commercial games that benefit. By hosting the
videos, YouTube puts this feature in reach of indie game developers who might
otherwise not be able to afford the server resources.
As a game developer, you can use recorded gameplay to engage the player in a
variety of ways:
- Video as an achievement: the player can replay successes or win a video for a special accomplishment, such as a high score.
- Video as a community building feature: the player can share funny and interesting mementos of gameplay or display self-created content.
- Video as a learning device: the player can see how others play, watch through the enemy's camera who killed them, or share game tips with others.
This article shows you how to add a video recording
feature to your game and share the recording on YouTube. It includes a demo application and source code that can be downloaded here.
The demo that comes with this article consists of a C++
implementation that extends the Blobs sample application from the DirectX SDK.
While the demo runs only on Windows, the library for encoding and uploading videos
to YouTube is cross-platform, and it should be easy to integrate the techniques
demonstrated in this article into your own game. Figure 1 shows a screenshot of
the demo.
Recording Gameplay
In most cases, you'll want to have the recording running
all the time during gameplay to ensure that the player doesn't miss any
noteworthy events. You can offer the player the option of selecting scenes from
the history of recordings (say, the last 10 minutes) or automatically store a
video when important events occur (such as a goal in a soccer game).
Alternatively, you can ask the player to start recording explicitly if the game
lends itself to it. Spore Creature Creator, for example, asks you to start
the recording by pressing a button in Test Drive mode to create a video.
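If you keep the recording running continuously, you also need to bound its memory use. Below is a minimal sketch of one way to do this: a ring buffer that holds only the most recent frames, so that at most the last few minutes of footage are retained. The class and member names are hypothetical and not taken from the demo.

#include <cstddef>
#include <vector>

// Hypothetical sketch: a ring buffer that keeps only the most recent frames,
// so continuous recording has a bounded memory footprint. Names are
// illustrative only; capacity must be at least 1.
class FrameHistory
{
public:
    explicit FrameHistory(std::size_t capacity)
        : frames_(capacity), next_(0), count_(0) {}

    // Store a captured (or already encoded) frame, overwriting the oldest
    // entry once the buffer is full.
    void Push(const std::vector<unsigned char>& frame)
    {
        frames_[next_] = frame;
        next_ = (next_ + 1) % frames_.size();
        if (count_ < frames_.size())
            ++count_;
    }

    std::size_t Count() const { return count_; }

    // Access frames oldest to newest, e.g. when the player decides to turn
    // the recent history into a video.
    const std::vector<unsigned char>& At(std::size_t i) const
    {
        std::size_t oldest = (next_ + frames_.size() - count_) % frames_.size();
        return frames_[(oldest + i) % frames_.size()];
    }

private:
    std::vector< std::vector<unsigned char> > frames_;
    std::size_t next_;
    std::size_t count_;
};

How much history fits into a given memory budget depends on the capture resolution and on whether you store raw or already encoded frames.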
A straightforward option to record gameplay footage is to
capture the contents of the framebuffer at regular intervals during gameplay
and encode these screenshots into a video. While simple to do, transferring the
framebuffer contents from the graphics card to system memory for each screenshot
isn't a free operation.
Also, you have to encode the screenshots in real-time,
which takes a big chunk out of a game's frame time and requires regular hard
disk accesses to store the video frames. Console game developers should check
whether their system provides hardware-specific APIs that speed up this
process. PixelJunk Eden on the PS3 is an example where recording the
framebuffer has been used to good effect.
While the previously mentioned technical challenges can be
acceptable for some games, more problematic is the fact that framebuffer
captures restrict the editing options that you can apply to the recordings. Say
you are developing a car racing game and would like to allow the player to
choose a different camera view in the video (first person, cockpit view, third
person) than that used while playing the game.
Halo 3, for example, implements
a feature called Saved Films that allows you to record gameplay and change
the playback speed, camera angle, and display options when watching the recording. (Halo 3 doesn't allow you to turn the recording into a video and upload it
to YouTube, however.) If you want this level of control, you need to record the
game state and render the video frames in a second step.
The recorded game state must comprise everything you need
to recreate a rendered frame at a later time. This will include the state of
your player's avatar, AI-controlled characters, and much more data, depending
on the particular type of game.
The integration of such a recording system can
be complex, but you can simplify your task a little if you forgo pixel-perfect
recreations of the gameplay. It's not important that every particle ends up in
exactly the same place as before, but the effect as a whole should appear in the same position.
You might already have a similar system in place for your
saved games, which also requires you to store the current game state and
recreate it at a later time. As with saved games, you will have to worry about
keeping recorded game state compatible with different versions of your game.
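To make this concrete, here is a minimal sketch of what a per-frame snapshot could look like for the game-state approach; the structures and fields are purely illustrative and not taken from the demo. The version field addresses the compatibility concern just mentioned.

#include <cstdint>
#include <vector>

// Hypothetical per-frame snapshot: only the data needed to re-render the
// frame later, plus a version number that guards against replaying
// recordings made with an older build whose layout has since changed.
struct AvatarState
{
    float         position[3];
    float         orientation[4];  // quaternion
    std::uint32_t animationId;
};

struct FrameSnapshot
{
    std::uint16_t version;              // bump whenever the layout changes
    std::uint32_t frame;                // frame counter at capture time
    AvatarState player;
    std::vector<AvatarState> npcs;      // AI-controlled characters
    // ... camera, score, effect triggers, and other game-specific data
};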
Because recording game state can result in a large
amount of data, an alternative approach is to record the input that caused the
game state. If you capture all button presses, joystick movements, and the
system clock, you can use this input to recreate a frame.
This requires a
deterministic game engine: one that produces the same outcome for each run. A
deterministic game engine is no easy feat, but such a system will produce a
minimal amount of data in each frame. Interestingly, you can use a deterministic
game engine to help fix bugs by recording the input while playtesting and
replaying the input to reproduce the bug (see this link and this link).
A combination of game state and
input captures might work too: capture the game state at regular but large
intervals (say, every 300-600 frames) and record the input in between those game
state updates.
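As an illustration of the input-capture approach, the sketch below stores one record per fixed-timestep frame and, during replay, feeds it back into the simulation instead of polling the controller. The names and layout are placeholders rather than code from the demo.

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical input capture for a deterministic engine: one record per
// fixed update step holding the raw controller state for that step.
struct InputRecord
{
    std::uint32_t frame;     // index of the fixed update step
    std::uint16_t buttons;   // bitmask of pressed buttons
    std::int8_t   stickX;    // analog stick axes
    std::int8_t   stickY;
};

class InputRecorder
{
public:
    void Record(std::uint32_t frame, std::uint16_t buttons,
                std::int8_t x, std::int8_t y)
    {
        InputRecord r = { frame, buttons, x, y };
        history_.push_back(r);
    }

    // During replay, feed the stored input into the simulation instead of
    // polling the real controller. Records are appended in frame order, so
    // a binary search would do; a linear scan keeps the sketch short.
    const InputRecord* Lookup(std::uint32_t frame) const
    {
        for (std::size_t i = 0; i < history_.size(); ++i)
            if (history_[i].frame == frame)
                return &history_[i];
        return 0;
    }

private:
    std::vector<InputRecord> history_;
};

Together with a fixed timestep and a stored random seed, records like these are enough for a deterministic engine to reproduce a run exactly.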
Because game state and inputs are very specific to your
engine, this article's demo goes the easy route: read back the framebuffer
contents at regular intervals, scale each frame to a suitable size, and send
the image to the video encoder. The demo uses IDirect3DDevice9::StretchRect() from the
DirectX SDK, but OpenGL aficionados can use glReadPixels()
instead.
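The following is a rough sketch of that readback path with Direct3D 9 rather than the demo's actual code: StretchRect() scales the backbuffer down on the GPU, GetRenderTargetData() copies the result into a system-memory surface, and LockRect() exposes the raw pixels to the encoder. The helper surfaces and the encodeFrame callback are assumed to be set up elsewhere.

#include <windows.h>
#include <d3d9.h>

// Hypothetical sketch: grab the current backbuffer, scale it to video
// resolution on the GPU, copy it to system memory, and hand the raw pixels
// to an encoder callback.
//
// Assumptions: scaledTarget is a non-multisampled render target at the video
// resolution; sysMemSurface is an offscreen plain surface in
// D3DPOOL_SYSTEMMEM with the same size and format; encodeFrame is your
// encoder entry point.
bool CaptureFrame(IDirect3DDevice9* device,
                  IDirect3DSurface9* scaledTarget,
                  IDirect3DSurface9* sysMemSurface,
                  void (*encodeFrame)(const void* pixels, int pitch))
{
    IDirect3DSurface9* backBuffer = 0;
    if (FAILED(device->GetRenderTarget(0, &backBuffer)))
        return false;

    // GPU-side downscale of the backbuffer to the video resolution.
    HRESULT hr = device->StretchRect(backBuffer, 0, scaledTarget, 0,
                                     D3DTEXF_LINEAR);
    backBuffer->Release();
    if (FAILED(hr))
        return false;

    // Transfer the scaled frame from video memory to system memory.
    if (FAILED(device->GetRenderTargetData(scaledTarget, sysMemSurface)))
        return false;

    // Expose the raw pixels and pass them on to the video encoder.
    D3DLOCKED_RECT locked;
    if (FAILED(sysMemSurface->LockRect(&locked, 0, D3DLOCK_READONLY)))
        return false;
    encodeFrame(locked.pBits, locked.Pitch);
    sysMemSurface->UnlockRect();
    return true;
}

Scaling on the GPU before the copy keeps the amount of data transferred to system memory small, which helps with the readback cost discussed earlier.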