The Saturday Paper - Exhausting
by Michael Cook on 01/04/14 11:22:00 am   Expert Blogs   Featured Blogs

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

 


When programming systems it makes sense to keep complexity and scale in mind. Don't try and render 400,000 sprites all at once. Don't try and send the entire world state to every player on the server. What about our design tools, though? Are we being too cautious when it comes to coding, and what riches might we be able to access if we jumped in the deep end from time to time? This week on The Saturday Paper: the power (and responsibility) of computing everything at once.

We're reading An Argument for Large-Scale Breadth-First Search for Game Design and Content Generation via a Case Study of Fling! by Nathan Sturtevant. In it, Nathan describes a system he built for exploring the mechanics and levels of an iOS puzzle game called Fling! Unlike most tools that might be used for solving or assisting in the design of puzzles, Nathan's approach uses brute force search and parallelisation to explore the space of possible puzzles. He raises some intriguing points about brute force approaches and their application to procedural content generation - something I think could be really relevant to a lot of developers right now.

[Screenshot: a Fling! level]

Let's start by talking about Fling! The screenshot above is from the game - a 7x8 grid on which balls are placed. You can swipe a ball in a direction, and it will move until it collides with another ball, which is pushed in that direction while the original ball stops moving. The objective is to push all but one of the balls off the screen. In the game the number of balls in a single level goes as high as 14, but as Nathan points out this is a fairly arbitrary limit. In theory, there are over 5.8 trillion potential ways of arranging balls on that 7x8 grid - just storing a single bit to record whether each level was solvable would take 675GB of space.
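
As a concrete sketch, here's the swipe rule as I read it from that description. One assumption is mine, not the paper's: a swipe with nothing in the ball's path is treated as illegal.

```python
WIDTH, HEIGHT = 7, 8

def swipe(balls, pos, d):
    """Apply one swipe to the ball at `pos` in unit direction `d`.
    `balls` is a frozenset of (x, y) cells. Returns the resulting board,
    or None if the swipe is illegal (nothing in the ball's path)."""
    dx, dy = d
    balls = set(balls)
    x, y = pos
    while True:
        # Find the nearest ball ahead of (x, y) in the swipe direction.
        nx, ny = x + dx, y + dy
        hit = None
        while 0 <= nx < WIDTH and 0 <= ny < HEIGHT:
            if (nx, ny) in balls:
                hit = (nx, ny)
                break
            nx, ny = nx + dx, ny + dy
        if hit is None:
            if (x, y) == pos:
                return None               # nothing to collide with: illegal
            balls.discard((x, y))         # a pushed ball rolls off the board
            return frozenset(balls)
        # The moving ball stops next to the ball it hit, which moves next.
        balls.discard((x, y))
        balls.add((hit[0] - dx, hit[1] - dy))
        x, y = hit
```

Swiping the ball at (0, 0) rightwards on a row containing balls at (0, 0), (2, 0) and (5, 0) cascades as described: the first ball stops at (1, 0), the second at (4, 0), and the third is pushed off the board.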

Of course, we're not interested in all of the possible combinations, just ones which are solvable. Still, that must be a huge number, right?

Beginning with boards containing just one ball (all of which are successful game states by definition), Nathan wrote a program that built up a database of solvable Fling! levels. To generate solvable levels containing n balls, you take all solvable levels containing (n-1) balls, add a single extra piece to them, then check whether the result can be reduced to a solvable level in a single move. This elegant bit of calculation meant that Nathan's code took just two hours to calculate all solvable levels of 10 pieces or fewer - less time than it took to write the code in the first place, as he points out. This forms an endgame database, which is a key part of the tool described in the remainder of the paper.
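
The level-building loop might look something like this sketch. This is not Nathan's code: the swipe rule is my own reading of the game's description (included here so the snippet stands alone, with a swipe at empty space treated as illegal), and the real system uses far more compact state encodings.

```python
WIDTH, HEIGHT = 7, 8
DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def swipe(balls, pos, d):
    """One swipe of the ball at `pos`; returns the new board or None."""
    dx, dy = d
    balls, (x, y) = set(balls), pos
    while True:
        nx, ny = x + dx, y + dy
        hit = None
        while 0 <= nx < WIDTH and 0 <= ny < HEIGHT:
            if (nx, ny) in balls:
                hit = (nx, ny)
                break
            nx, ny = nx + dx, ny + dy
        if hit is None:
            if (x, y) == pos:
                return None                    # nothing to hit: illegal swipe
            balls.discard((x, y))              # pushed ball leaves the board
            return frozenset(balls)
        balls.discard((x, y))
        balls.add((hit[0] - dx, hit[1] - dy))  # stop next to the hit ball
        x, y = hit

def moves(balls):
    """Every board reachable from `balls` in a single legal swipe."""
    return {nxt for pos in balls for d in DIRS
            if (nxt := swipe(balls, pos, d)) is not None}

def grow(solvable_prev):
    """All solvable n-ball levels, given all solvable (n-1)-ball levels:
    add one extra piece anywhere, and keep the result if some single
    move takes it back into the known-solvable set."""
    solvable = set()
    cells = [(x, y) for x in range(WIDTH) for y in range(HEIGHT)]
    for level in solvable_prev:
        for cell in cells:
            if cell in level:
                continue
            cand = frozenset(level | {cell})
            if cand not in solvable and not moves(cand).isdisjoint(solvable_prev):
                solvable.add(cand)
    return solvable

# Level 1 of the database: every one-ball board is solved by definition.
ones = {frozenset({(x, y)}) for x in range(WIDTH) for y in range(HEIGHT)}
twos = grow(ones)   # under this swipe rule: every pair sharing a row or column
```

Each subsequent layer is just another call to `grow`, which is also why the work parallelises so well: every candidate level can be checked independently.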


By loading the endgame database directly into memory (the work described in this paper uses the 9-piece database for speed, but there's no reason why larger numbers of pieces wouldn't work) Nathan was able to develop a tool that could represent Fling! game states and then answer the following questions:

  • How many states can I reach from this game state?
    That is, how many different moves can I make in total in this level, exploring all the way through to every failure and success state. We'll come to why this is useful later, but this can be a big number as the levels get bigger.
  • How many starting moves lead to a solution?
    This can be calculated with help from the endgame database. The tool annotates the level with little white arrows showing these starting moves - incredibly helpful for a designer wanting to visualise the solution space.
  • What effect would single changes have on the board?
    When you present a Fling! level to the tool, it takes each of the 56 tiles on the board in turn, adds a ball to it (or removes the one that's there), then rechecks the board. Using this data, it can shade tiles where a single change would make a solvable level unsolvable, or vice versa.
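
That third check is essentially a loop over the board. Here's a sketch of the pattern; the `toy` predicate is a stand-in for the real solvability test, which would consult the endgame database:

```python
WIDTH, HEIGHT = 7, 8

def critical_tiles(balls, solvable):
    """Tiles where toggling a ball flips the level between solvable
    and unsolvable. `solvable` maps a frozenset of cells to a bool."""
    base = solvable(frozenset(balls))
    flips = []
    for x in range(WIDTH):
        for y in range(HEIGHT):
            changed = set(balls)
            # Toggle the tile: remove the ball if present, add one if not.
            changed.symmetric_difference_update({(x, y)})
            if solvable(frozenset(changed)) != base:
                flips.append((x, y))
    return flips

# Stand-in predicate for illustration: "solved iff one ball remains".
toy = lambda b: len(b) == 1
```

With the stand-in predicate, a two-ball board flips to solvable exactly when one of its own two balls is removed, so those two tiles come back as critical.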

What you end up with is a tool that can show you how many potential starting moves a puzzle has, how to fix (or break) a puzzle, and also tell you how big this puzzle is in terms of possible moves. Why is that last point a big deal? Here's a graph showing the level number (on the X-axis) versus the total number of states (on the Y-axis, logarithmic) in the original Fling! game:

[Graph: level number (X-axis) vs. total number of states (Y-axis, logarithmic)]

In other words, the number of states in this level's search tree seems very closely related to what the game's designer thought of as difficulty. Which makes sense - the more wrong alleys there are to go down, the more potential there is for a player to trip up. What if they aren't all wrong alleys? Nathan's already ahead of you there. By analysing Fling!'s levels, he found that every level was designed to have a single valid solution path. That is, there is only ever one right move to make on the board (Nathan tells me there are some interesting conclusions you can draw from this about certain kinds of moves, given that there's only ever one correct move - he's looking into these at the moment).

What's important to take away here is that both the endgame database and the tool's analysis of level designs are built more or less exhaustively - there are no search heuristics guiding the tool towards levels it's been told might be good, or moves it's been told might be productive. Instead, it searches all possible moves, it explores all possible routes through the search space, and as a result you find that actually that space wasn't so scary after all. And once you have that complete archive of every possible level in memory, a human designer can sit down and really get to grips with a level design.
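
Part of why a complete archive like this fits in memory is how small each state is: the 7x8 board has 56 cells, so any arrangement of balls packs into a single 64-bit integer, one bit per cell. Here's a sketch of such a packing (the paper's actual encoding may differ):

```python
WIDTH, HEIGHT = 7, 8

def pack(balls):
    """Encode a set of (x, y) ball positions as a 56-bit bitboard."""
    bits = 0
    for x, y in balls:
        bits |= 1 << (y * WIDTH + x)
    return bits

def unpack(bits):
    """Decode a bitboard back into the set of occupied (x, y) cells."""
    return {(i % WIDTH, i // WIDTH)
            for i in range(WIDTH * HEIGHT) if bits >> i & 1}

level = {(0, 0), (3, 5), (6, 7)}
assert unpack(pack(level)) == level
assert bin(pack(level)).count("1") == 3   # popcount = number of balls
```

At one bit of solvability data per state, this is also where the 675GB figure for the full board comes from - and why restricting the database to a fixed number of pieces keeps it manageable.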


You might think that this is all well and good for grid-based games with small state spaces. And it absolutely is. There are plenty of people working on incredible grid-based games with small state spaces, and even when that state space grows a little we may still find it tractable to store endgame databases and exhaustively explore the design space. All the work described in the paper uses parallel processing wherever it can, since individual game states are independent of each other - there's no shared data, like health, to carry between them. Even data like that could be folded into the state representation rather than held separately: a game like Corrypt has very few resources, so each level could be processed a few times with different resource levels embedded in the game state.
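
As a tiny sketch of that last idea - folding extra data into the state itself, so that every state stays self-contained and can be processed independently - the field names here are illustrative, not taken from Corrypt or the paper:

```python
from typing import FrozenSet, NamedTuple, Tuple

class State(NamedTuple):
    balls: FrozenSet[Tuple[int, int]]
    resources: int       # e.g. a Corrypt-style resource count

a = State(frozenset({(1, 2)}), resources=3)
b = State(frozenset({(1, 2)}), resources=2)
assert a != b            # same board, different resources: distinct states
assert len({a, b}) == 2  # hashable, so usable directly as set/dict keys
```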

The real importance of the paper is not the specific problem the tool solves, though. It's the rewards that Nathan found by attempting to exhaustively solve a problem instead of tiptoeing through it with the usual applications of A* search and optimisation techniques. Vast memory reserves, multithreading, cloud computing - we have a lot of resources at our fingertips these days, and that should make us reconsider the use of brute force techniques. What was once a one-way ticket to slowdown and memory overflows could now be a powerful new way of looking at game design.

Where To Find More

Nathan is currently an Assistant Professor at the University of Denver. He presented this work in a fantastic talk at the AI in the Design Process workshop at AIIDE 2013, and really impressed me with the results in the paper. I hope I got some of it across! Do get in touch with him if you want to talk more. If you're at the AI Summit at GDC this year, Nathan will be talking about Monte Carlo search - he's a great speaker who's done a terrifying wealth of work on search techniques, so I recommend checking the talk out!

If you're interested in the work in relation to Fling! then apparently Nathan's working on a new level pack for Fling! based on some of the ideas this tool started from, and mixing in some automated design as well. I'm really excited to see more about this - perhaps another Saturday Paper in the future...

As ever, you can get in touch with me via Twitter with feedback on this week's column. Ludum Dare results next week - don't forget to check in at my site for updates on how ANGELINA performed in its very first game jam.


Comments


Eric Cassells
Hey could we get a look at Nathan's code?

Michael Cook
I'm not sure if source is available! I'll try and get in touch with Nathan and ask him.

Nathan Sturtevant
Hi Eric -

The parallel breadth-first search code is here:

https://code.google.com/p/hog2/source/browse/trunk/apps/fling/Driver.cpp#265

It's part of a large code base which is full of lots of different search algorithms, but it isn't as well documented as it could be. It's also still changing, so what is sitting there isn't quite what's in the paper. (I tried out using mmap to improve loading speed, but the results were mixed.) So, the line number is valid now, but might not be in a few months.

If you have questions about it, feel free to let me know.

Ron Dippold
I recently worked on a project that required optimization of firing ten rows on a non-regular printhead in any order - which order produces the best looking (most even) pattern for a given paper speed?

I was thinking about various ways to minmax this, set it up as linear solving, etc when I realized that 10! is just 3.6 million permutations. I just simulated dot placement for every possible order and spit out the ones with the best patterns. Even in python it only takes 20 minutes to run, it's far less prone to error than anything trickier, and I know that I didn't accidentally prune a branch that had a promising order in it.

Of course if the head had two more rows it'd end up being 44 hours. And at 14 rows it entirely breaks down (8008 hours) unless we start using the parallelisation discussed here - and this would be embarrassingly parallel ( https://en.wikipedia.org/wiki/Embarrassingly_parallel )
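
A sketch of the exhaustive approach Ron describes: enumerate every firing order and keep the one that scores best. The printhead model here is a stand-in (just "successively fired rows should be far apart"), not his actual simulation, and it's scaled down to 6 rows so it runs instantly:

```python
from itertools import permutations

def evenness(order):
    """Stand-in score: the smallest jump between successively fired rows."""
    return min(abs(a - b) for a, b in zip(order, order[1:]))

rows = range(6)                        # 6! = 720 orders; 10! would be 3.6M
best = max(permutations(rows), key=evenness)
assert evenness(best) > evenness(tuple(rows))   # beats firing rows in order
```

Because every permutation is scored independently, this is exactly the embarrassingly parallel shape Ron mentions: split the permutation list into chunks and score each chunk on its own worker.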

Michael Cook
Haha, nice example! And a cool application of the idea. Did you get what you wanted out of the python run in the end?

Ron Dippold
Yes, it works great. And since adding a new print mode is rare, the 20 minutes is nothing.

Steven Stadnicki
Having brute-force available certainly makes the approach more effective, but for me the key takeaway here is that anyone who writes this sort of logic puzzle should invest the time to write an exhaustive solver for it. It helps act as a check on your human-designed puzzles, to make sure that there aren't any inadvertent cooks to your intended solutions; and it opens up the possibility for doing either this sort of exhaustive enumeration or even a guided random exploration to create an unbounded number of computer-designed puzzles. In some cases, it's even possible to build a (still-exhaustive) solver that mimics common human approaches to solving a puzzle (e.g. Sudoku) and use it to give auto-generated puzzles a rudimentary degree-of-difficulty that goes beyond static metrics like 'number of givens'.

