Cover your assets: The cost of distributing your game's digital content
October 24, 2012 | By Colt McAnlis

More: Social/Online, Indie, Business/Marketing



In this reprinted #altdevblogaday in-depth piece, Google's game developer advocate and Blizzard veteran Colt McAnlis shares concepts and strategies for distributing your content online yourself.

Today's game developers have a wide choice of digital distribution platforms from which to sell their products. When a game is very large it's often necessary to deliver some of its content from another source, separate from the distribution service. This can be new territory for those of you who are unfamiliar with web development.

In this article, we'll discuss some of the concepts and strategies that will help you decide how to distribute your content on the web. Distributing your digital assets yourself is well worth the effort: it lets you cut load times and delivery costs, which makes your users happier.



When thinking about digital delivery, the two main factors to consider are time and money: The time it takes the user to download content and start playing, and the dollar cost to you, the developer, to deliver the bits. As you might suspect, the two are related.

Measuring the cost of time: bandwidth and latency


We all know that users hate long load times; they are a well-documented factor in the success of websites. To better understand the issues, let's look at some results from running Speedtest to measure download speeds from various servers around the world to a public computer in a San Jose library.

Using the measured download speeds, here are the times it would take to download 1GB from various locations:

Server (City, Country) | Download Speed | Time to download 1GB (minutes)
San Jose, USA          | 20 Mbps        | 6.8
Maine, USA             | 15 Mbps        | 9.1
Lancaster, UK          | 2.25 Mbps      | 60.7
Sydney, AU             | 2.22 Mbps      | 61.5
Madrid, ES             | 1.93 Mbps      | 70.7
Beijing, CN            | 0.8 Mbps       | 170.6

Depending on the location of the server, it can take from about 7 minutes to almost 3 hours to transfer the same amount of data. Why is there such a variation in download time? To answer the question, you must consider two terms: bandwidth and latency.
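The arithmetic behind the table is simple enough to sketch. The following snippet reproduces the download times from the measured speeds above (using binary gigabytes, which matches the table's figures):

```python
# Estimate transfer time for a payload at a measured download speed.
# Speeds are the article's Speedtest measurements.

def transfer_minutes(payload_gb: float, speed_mbps: float) -> float:
    """Minutes to move payload_gb gigabytes at speed_mbps megabits/second."""
    payload_megabits = payload_gb * 8 * 1024  # 1 GB = 8192 megabits (binary)
    return payload_megabits / speed_mbps / 60

speeds = {"San Jose, USA": 20, "Maine, USA": 15, "Beijing, CN": 0.8}
for city, mbps in speeds.items():
    print(f"{city}: {transfer_minutes(1, mbps):.1f} min")
```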

Bandwidth is the amount of data that can be transmitted in a fixed amount of time, typically measured in bits per second. Developers have some control over how much bandwidth they consume (for example by reducing packet size or the number of packets transmitted), but available bandwidth is ultimately controlled by the infrastructure between the user and the distribution service and the service agreements that each has with their respective ISPs.

For instance, an ISP may offer different tiers of bandwidth service to users, or may throttle bandwidth based upon daily usage limits, and the "last mile" of copper – or whatever the connection to the user's platform happens to be – can also degrade bandwidth. (A chain is only as strong as its weakest link.)

Latency measures the time delay experienced in a system. In networking parlance, latency is often expressed as round-trip time (RTT), which is easier to measure because it can be done from a single endpoint. You can imagine RTT as the time it takes for a sonar ping to bounce off a target and return to the transmitter; the familiar Unix ping program does just that.

As a developer, you have little control over latency. Your algorithms might contribute some overhead that accrues to latency, but the physical realities of the distance between the transmitter and receiver and the speed of light impose a hard lower bound on RTT. That is, the physical distance between two points across a specific medium (like copper wire or fiber cable) caps how fast data can be transmitted through it.
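That lower bound is easy to estimate. Light in fiber travels at roughly two-thirds of its vacuum speed; the distance figure below is a rough great-circle approximation used for illustration, not a measurement:

```python
# Physics-imposed floor on round-trip time over fiber.
SPEED_OF_LIGHT_KM_S = 299_792
FIBER_FACTOR = 2 / 3  # typical slowdown from fiber's refractive index

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time in milliseconds over fiber."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return one_way_s * 2 * 1000

# Roughly 9,500 km separate San Jose and Beijing.
print(f"San Jose -> Beijing: {min_rtt_ms(9500):.0f} ms minimum RTT")
```

No protocol improvement can get under this floor; real RTTs are higher still because of routing, queuing, and processing delays.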

At first glance, you might think that you can buy your way to faster content delivery by purchasing higher bandwidth connections. This might work if you knew exactly where your users are located and they don't move; however, as Ilya Grigorik points out, incremental improvements in latency can have a much greater effect on download times than bandwidth improvements. (His argument relies on some interesting work by Mike Belshe who also makes the same case.)

The simplest and most popular way to reduce latency is to minimize the distance between the user and the content server using a Content Delivery Network, or CDN.

Attacking latency with locality

Content Delivery Networks duplicate and store your data in multiple data centers around the world, reducing the amount of time it takes to fetch a file from an arbitrary point on the globe. For instance, I may have originally uploaded a file to a CDN server that happened to be in San Jose, but a user downloading the file from Beijing would most likely receive the file from a server in China.

When you use a Content Delivery Network you take advantage of one of the basic features of internet architecture: The internet is, at its core, a hierarchy of cached data. YouTube is an excellent example. Once a video is uploaded, it is distributed to the primary YouTube data centers around the world, avoiding the higher cost of sending the file from the originating data center no matter where the request is coming from. Google Cloud Storage also uses a similar policy. In high-demand areas multiple intermediate caches can exist. For example, there may be two additional data centers in Paris.

You may not be aware of another hidden efficiency of the net: Client machines usually cache data for faster retrieval later, but data can be cached by an Internet Service Provider (ISP) as well, before sending it on to the end user – all in an attempt to reduce the cost of data transfer by keeping the bits closer to the users.

Attacking latency with technology

Some CDNs provide advanced transfer protocols that can speed up delivery even more. Google App Engine supports the SPDY protocol, which was designed to minimize latency and overcome the bottleneck that can occur when the number of concurrent connections from a client is limited.

A CDN can also offer flexibility in controlling access to your data. Google Cloud Storage supports Cross-origin resource sharing (CORS), and Access Control Lists, which can be scripted. These tools can help you tailor the granularity of your content, pairing specific assets with specific kinds of users for example. Google App Engine is scriptable. Scripting can help you increase the security of your online resources, for example by writing code that detects suspicious behavior such as an unusual barrage of requests for an asset coming from multiple clients.
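As a concrete illustration of the CORS side of this, here's a minimal sketch of the policy document that `gsutil cors set cors.json gs://your-bucket` accepts for a Google Cloud Storage bucket. The origin is a hypothetical game domain; adjust the methods and headers to your own needs:

```python
# Build the CORS policy JSON that Google Cloud Storage expects, allowing
# read-only asset fetches from one (hypothetical) game domain.
import json

cors_policy = [{
    "origin": ["https://game.example.com"],  # placeholder game domain
    "method": ["GET", "HEAD"],               # read-only asset fetches
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600,                   # browsers cache the preflight
}]

with open("cors.json", "w") as f:
    json.dump(cors_policy, f, indent=2)
```

Locking the origin down like this keeps other sites from hotlinking your assets directly from the bucket.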

Using a CDN allows you to scale and deliver data to your users around the world more efficiently and safely.

A note on mobile content delivery

Mobile networks have additional problems involving latency and download speed. Ilya Grigorik does a great job of explaining this:
"The mobile web is a whole different game, and not one for the better. If you are lucky, your radio is on, and depending on your network, quality of signal, and time of day, then just traversing your way to the internet backbone can take anywhere from 50 to 200ms+. From there, add backbone time and multiply by two: we are looking at 100-1000ms RTT range on mobile.

Here's some fine print from the Virgin Mobile (owned by Sprint) networking FAQ: Users of the Sprint 4G network can expect to experience average speeds of 3Mbps to 6Mbps download and up to 1.5Mbps upload with an average latency of 150ms. On the Sprint 3G network, users can expect to experience average speeds of 600Kbps – 1.4Mbps download and 350Kbps – 500Kbps upload with an average latency of 400ms.

To add insult to injury, if your phone has been idle and the radio is off, then you have to add another 1000-2000ms to negotiate the radio link."
So for you mobile developers, be aware of these issues when trying to get data to the user fast. Be sure your streaming and compression systems are designed to compensate for these extra burdens.

The cash cost of content delivery

You must spend some money to use a CDN. (Sadly, the free lunch comes next Tuesday.) For example, Google Cloud Storage charges around $0.12 per gig for the first 1 terabyte of data transferred each month.

To put that in perspective, let's say your game sees 3.4 million unique users monthly. Assuming your in-game content is 1GB in size, and Google Cloud Storage charges about $0.085 per gig at that volume (you'd be transferring about 3.4 petabytes a month), your cost would be about $9,633 per day.

Looked at another way, to break even you'd need to earn about $0.003 per user per day to cover distributing that much content. If you've got 3.4 million monthly users, you should easily be able to manage that.

Admittedly these are worst-case numbers; the chance that all 3.4 million unique users pull down the full 1GB every single month is a bit far-fetched. After the first download, returning users would already have your content, so this scenario doesn't represent a long-term estimate.
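The cost model above reduces to one multiplication, which makes it easy to plug in your own numbers:

```python
# Back-of-the-envelope CDN egress cost using the article's figures:
# 3.4 million monthly users each pulling 1 GB at ~$0.085/GB.

def monthly_egress_cost(users: int, gb_per_user: float, usd_per_gb: float) -> float:
    """Total monthly transfer cost in dollars."""
    return users * gb_per_user * usd_per_gb

monthly = monthly_egress_cost(3_400_000, 1.0, 0.085)
print(f"${monthly:,.0f}/month, ${monthly / 30:,.0f}/day, "
      f"${monthly / 3_400_000:.4f}/user/month")
```

Swap in your own user count, payload size, and your provider's per-gigabyte rate at your volume tier to get a first-order estimate.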

Knowing is half the battle

Once you're aware of the time and financial costs involved with distributing assets, you can plan a path to do something about it.

Only send to users what they need (right now)

Up until now, we've assumed that all 1GB of content is required for each user to start the game, but that's a great, horrible lie. In reality, the user may only need a subset of the data to begin play. With some analysis you will find that this initial data can be delivered quickly, so the user can start to experience the content immediately, while the rest of the data streams in the background.

For instance, if a user downloads the first 20MB from a website or digital software store, can they start playing right away and stream in the rest later? How long until they need the next 20MB? What about the next 400MB? Could a CDN deliver the follow-on content faster or more flexibly? Optimizing for this sort of usage can decrease the perceived load time and the overall transfer cost, enhancing the accessibility and affordability of your product.
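One way to implement this is with standard HTTP Range requests: fetch a small first chunk so play can begin, then stream the rest in larger pieces. Here's a sketch of the chunking logic; the 20MB first-chunk size follows the article's example, and the larger follow-on size is an arbitrary assumption:

```python
# Split an asset pack into HTTP Range header values: a small first chunk
# for fast startup, then larger background fetches.
MB = 1024 * 1024

def range_headers(total_size: int, first_chunk: int, later_chunk: int):
    """Yield standard HTTP Range header values covering total_size bytes."""
    start = 0
    size = first_chunk
    while start < total_size:
        end = min(start + size - 1, total_size - 1)
        yield f"bytes={start}-{end}"
        start = end + 1
        size = later_chunk  # after the first chunk, switch to bigger fetches

ranges = list(range_headers(100 * MB, 20 * MB, 40 * MB))
print(ranges[0])  # the 20MB the user needs to start playing
```

Each value goes into a `Range` request header; the server must support partial content (a 206 response), which CDNs generally do.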

Own your content updates

In the current world of game development, it's common to run on many different platforms. When an update is available, it takes time to be sure that the new content has been received by all your users.

For instance if you've pushed a new build of your game server, some of your players can be out of sync for an extended period, which can generate lots of "OMG th1s g4m3 duznt werk!" bugs for your QA testers. Controlling how and when your app performs updates can be highly beneficial – though it must be noted that in some cases the updating logic is managed by the operating system and is out of your hands. Whenever possible, your app should be aware of new updates and capable of fetching them.

Most applications will contain some number of platform-specific assets. For instance, the types of hardware-supported texture compression formats can vary by platform, you may need a separate tier of lower-resolution models to run on mobile, or some of your content can vary by region. When any of these assets change, the associated update need not be universal. If you can segment at least a part of your content by platform and location you can better control when you need to update, who needs to update, and what you need to send.
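A simple way to realize this segmentation is a tagged asset manifest that each client filters by its own platform and region, so an update to one slice never touches the others. The manifest entries below are hypothetical examples:

```python
# Filter a (hypothetical) asset manifest by platform and region so a
# client only downloads, and only re-downloads on update, what it needs.
MANIFEST = [
    {"path": "textures/hero.pvr",  "platform": "ios",     "region": "all"},
    {"path": "textures/hero.dds",  "platform": "desktop", "region": "all"},
    {"path": "audio/intro_fr.ogg", "platform": "all",     "region": "fr"},
]

def assets_for(platform: str, region: str):
    """Return only the manifest entries a given client actually needs."""
    return [a for a in MANIFEST
            if a["platform"] in (platform, "all")
            and a["region"] in (region, "all")]

print([a["path"] for a in assets_for("ios", "fr")])
```

With per-entry version hashes added, the same filter tells you exactly which users an update applies to and what they need to fetch.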

Moving forward

To be fair, the distribution strategies we discussed here are not for everyone. If your entire app is under 10MB, there's little need to segment assets or distribute content outside of the primary distribution point.

But if your app is hefty, it pays to understand the costs involved with distributing your game's digital assets, and how you can reduce those costs – and the users' costs as well. It's also wise to consider how a distribution strategy can decrease load times and reduce headaches caused by updates. By taking control of distribution you have the ability to save money and increase the quality of the end-user experience.

Which is the point, right?

[This piece was reprinted from #AltDevBlogADay, a shared blog initiative started by @mike_acton devoted to giving game developers of all disciplines a place to motivate each other to write regularly about their personal game development passions.]


Comments


Brett Harris
Great article!

The costs listed in the article are one facet of the cost of doing business digitally. The other is preparing and publishing the actual product digitally.

One big money sink is the cost of branching a digital build for various services. For example, your company may have a proprietary digital distribution service, such as EA's Origin client or Ubisoft's Uplay PC, that may or may not require a specific build format. If this build is unique to the proprietary service, then a new build needs to be created to be sent to third party digital retailers, such as GamersGate, Direct2Drive, Amazon, etc. Finally there are "premium" digital retailers, such as Valve Software's Steam or OnLive, that require special build formats and additional development resources. With Steam, this investment can make good business sense, but with smaller premium retailers (such as OnLive and other streaming services) this may not be the case.

The total cost for additional development/testing/etc can vary radically based on the complexity of the product, as well as the infrastructure needed to work with external partners (managing product code allotments with each retailer for example) and long term support (patches/DLC may not be compatible between SKUs). This does not even include the various contract limitations and fees taken out by these third party digital retailers.

It can be a mess at times, but it is the wave of the future.

