This blog is a repost of the original text on my site with some modifications
Symbols are the language of video games
If a player sees a red bar near his or her character that has no instructions or label, it is assumed this represents "health" and decreases with damage taken. A blue bar near the character that has no instructions or label is assumed to represent "mana" -- the currency used to cast magical spells. If the cursor turns into a finger or a gear, it means the moused-over object may be clicked in order to interact with it. Question marks or stylized icons mean an NPC has something to say, and exclamation points mean that NPC is waiting on you to advance his state. When you spend points in a tech tree, it unlocks the next node to be purchased, which lights up. And this goes on.
When we play a game, we subconsciously (or consciously) recognize all of these things. We call good games "intuitive" and bad games "confusing" but really this is a matter of the designer's fluency in the language of games, and also the designer's ability to create novel language for new features that is immediately understood by players. Good writers add to the existing body of language, and good designers invent new symbols which add to the semiotic lexicon.
On a recent play-through of Skyrim I walked into a cave and saw a dead body on the ground, three stone pillars, and a locked gate. Semiotics told me a dead body on the ground means the corpse will have a book with a riddle that tells me how to rotate the pillars and unlock the gate.
I read the riddle, and was stumped.
My girlfriend, who is not a gamer and had never seen Skyrim before, was watching me play. She said "oh the pillar next to the water gets the fish." My gamer brain had completely discarded the background as for-atmospheric-purposes-only, but she was free from my prejudice.
One challenge for sandbox games, as graphical fidelity improves each generation, is that there is so much detail painted into a scene that the player has no idea what's important. In earlier RPGs with less sophisticated graphics, the difference between a "background tile" and a "foreground object" was clear. The limited color palettes and resolutions made it necessary for designers to highlight the semiotic elements.
I never completed Skyrim.
What happened is this: I could be head of the Mages Guild, and head of the Assassin's Guild, and head of a mercenary's group, and get the title of Thane from every lord in the land (even the ones in direct opposition). Once I did a quest for a goddess of light and became her avatar, and then I did a quest for a demon and became his agent. Nobody seemed to question my allegiance, in a game whose main storyline deals with a large regional conflict with many sides and players. When I realized my choices weren't meaningful, I stopped playing. This is despite finding the game a generally fun experience.
Later I purchased the DLC thinking it would get me back into the game, because I genuinely did enjoy the moment-to-moment gameplay. So I progressed along the DLC quest-line and had the choice of becoming a vampire hunter or a vampire. I knew that once I decided I couldn't go back -- I knew that the choice was meaningful. Was this the remedy?
In short, no.
The game's symbols didn't give me enough information about the choice, and I couldn't decide. After a lot of research on the web, I got bored and moved on to a different game.
I'm impossible to please, right? I'm not happy with meaningless choice, and I'm not happy with choices that have consequences. This got me thinking.
If you break it down, there are a few moving parts here: whether the player is aware a choice is being made at all, whether the choice is irreversible, and whether the player feels informed enough about the consequences to be confident a good decision can be made (based on that player's playstyle).
The most apparent choice in an RPG is during character creation -- what class is selected during the creation process, how the points are spent on attributes and skills, what feats are selected, and so forth. And for the most part there is an extremely entrenched convention, so RPGs have a default design that works. It is the mechanics of choice during gameplay that widely differ.
The Age of Conan MMO is an interesting case, as it had some fantastic writing that received critical acclaim. But the design of the game was such that the dialogs didn't matter to gameplay. While the player could select dialog option 1, 2 or 3, there was largely no meaning in these choices. It becomes a non-choice. Do you want vanilla or vanilla?
But let's go back to the Skyrim example.
You are the Dragonborn! The game spends a lot of effort making the player feel special. The character is a unique individual with unique powers who has been selected by fate to save the world. One frequent in-game event is a dragon attacking a town, and the player must "rescue" the town by killing the dragon -- or suffer the consequence that quest-giving NPCs may die (though never the ones critical to the main questlines).
So the player kills the dragon in plain view of NPC guards and all the town-folk, and then the player eats the dead dragon's soul.
But when the encounter is over those very townfolk and guards that were just saved might say something asinine like "Maybe I'm the Dragonborn, but I just don't know it yet." Or they'll warn the player not to steal things, or otherwise treat the player like the same random transient he or she was right before the dragon was killed.
Notice, in the background, the corpse of the dragon this guard helped me kill five minutes prior:
Contrast this to the water behind the pillar in the puzzle example, which most certainly did inform gameplay. That small detail was critical to the progression in the dungeon. So how is the player's brain supposed to know what's relevant to the gameworld?
I want to compare a design mechanic in Fallout 2 to the corresponding mechanic in Fallout 3 and NV to illustrate a point.
In Fallout 2, as a player I know that if my character doesn't have enough Perception I might not see a dialog choice. Basically, I don't know what I don't know. In Fallout 3 and Fallout NV, the game informs me every time a given skill check is being made in dialog, even if I'm not skilled enough to select that option. I found Fallout 2's method to be the better design choice.
Both in Fallout/Fallout 2 and Fallout 3/NV there is an up-front choice presented when you level -- which skills do you deliberately increase? Later each game world presents additional options when specific skills are checked within the game mechanics. If the player has a high lockpick, he or she may open difficult safes or locked doors; if the player has a high survival skill certain foods may be created, and so forth. When skill checks occur is clearly telegraphed by the game, so players can comfortably predict what the skills do, meaning they can know the consequences of the choice to invest points in lockpick over survival.
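The predictability of those world-mechanic checks can be sketched as a simple threshold the player sees coming. A minimal illustration follows; the function names and difficulty numbers are invented for this sketch, not taken from any Fallout game's data:

```python
# Hypothetical sketch of a telegraphed skill check, in the spirit of a
# lockpick mechanic. All names and thresholds here are invented.

def attempt_lockpick(player_skill: int, lock_difficulty: int) -> str:
    """The check is visible up front: the player can compare skill to
    difficulty before deciding whether to invest more points."""
    if player_skill >= lock_difficulty:
        return "opened"
    return f"too difficult (requires lockpick {lock_difficulty})"

print(attempt_lockpick(75, 50))  # opened
print(attempt_lockpick(30, 50))  # too difficult (requires lockpick 50)
```

Because the required number is displayed before the attempt, investing points in lockpick is an informed choice -- exactly the property the dialog checks discussed below lack.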
What's not clear is when and how these same skills are checked in conversation, because those checks are arbitrary. They also clash with the way the same language is used elsewhere in the game -- if I want to pick locks and open safes, I take lock-picking. If I don't care about picking locks, I spend points in a different skill. But what if I'm in a dialog with an NPC whose deceased brother happens to be a thief (which I didn't know before I entered the conversation), and to convince her to help me I have to pass a lockpicking check? Because some designer said "let's check the lockpick skill here, that kind of makes sense?"
It's as if you travel to a foreign country that presumably speaks English, but every word means something completely different and there is no dictionary.
This conflict of use is further exacerbated by the nature of the sandbox genre, where the player can (and will) overturn every rock, because maybe there'll be something hiding underneath it. The conversation system dangles the carrot of "what's behind this door" and makes the player feel bad for not somehow knowing in advance that this conversation would check that skill. Players don't like to be punished arbitrarily.
If you want to give a kid an apple, give him an apple. Don't give him a choice between an apple and candy and then say "Sorry, you can't pick candy because it's after 10 o'clock and the stores are closed on Tuesdays, plus you didn't do these 5 chores I am only now mentioning."
An additional downside of this disparity of choice is that sometimes a consequence will appear in the late-game based on something the player unknowingly does in the early-game, which can be very frustrating.
One insidious trend I see in modern game direction is the game attempting to convince the player that a meaningless choice is meaningful, so that the player feels good for picking the "right" option (when both options are equally meaningless). Because games are increasingly designed as psychological reward systems.
Everyone is a winner, and everyone picked the right options, and here is your achievement. Don't you feel good about yourself? Well, if you want to feel good about yourself again, buy the sequel.
That's the con -- not using semiotics to inform the player, but instead to psychologically manipulate reward centers. We crave achievements, even for not actually doing anything, because we convince ourselves that because we got the achievement we must deserve it. This is human nature.
The cynical designer can allow everyone to easily win, and make it feel like this was some great accomplishment. Instead of calling them manipulative, the industry labels such games as "accessible."
Another con is to offer red-herring "false choices." These are choices that aren't really choices -- do you want vanilla ice-cream, or to be punched in the throat? Players feel smart for picking ice-cream, and players like games that make them feel smart.
The player choice paradox is this:
The paradox is that players want choice, and then want the thing which makes choice irrelevant. This isn't every gamer, but in my experience as a designer it's a fair share of those who play RPGs. Even -- perhaps especially -- those that don't want to admit it.
So what's going on here?
Narrative in games is still a trigger and state-based affair.
Let's say there are two NPCs -- Bob and Mary. If the player talks to Bob before Mary, it advances a game state and Mary now has a new dialog. If the player talks to Mary before Bob, it advances a different game state and Bob gives a different dialog than he would if the player talked to him first. If the player walks within 10 meters of Bob (specifically, into a bounded area that encompasses Bob), he will walk over to the player using dynamic pathing. And if the player is wearing a certain hat, Mary will attack.
These things are typically implemented by designers in a high-level scripting language, after coders have added the functionality to the game editor. Note that quests are just game states which have been flagged to show up in the journal, with all the surrounding icons and text that are also created by the design team.
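The Bob-and-Mary logic above can be sketched as a handful of state flags plus trigger callbacks. Everything in this sketch (the names, the flag set, the trigger-volume API) is invented for illustration; real editors expose equivalents through their own scripting languages:

```python
# Minimal sketch of trigger- and state-based NPC scripting, loosely
# modeled on the Bob/Mary example. All names and APIs are invented.

game_state = set()  # global quest/state flags

def talk_to(npc: str) -> str:
    if npc == "Bob":
        game_state.add("talked_to_bob")
        if "talked_to_mary" in game_state:
            return "Bob: So Mary sent you."
        return "Bob: Have you seen Mary?"
    if npc == "Mary":
        game_state.add("talked_to_mary")
        if "talked_to_bob" in game_state:
            return "Mary: Bob mentioned you."
        return "Mary: Who are you?"
    return "..."

def on_enter_trigger_volume(npc: str, player_wearing: str) -> str:
    # Proximity trigger: the bounded area around an NPC fires this callback.
    if npc == "Mary" and player_wearing == "cursed_hat":
        return "Mary attacks!"
    return f"{npc} walks over to the player."

print(talk_to("Bob"))                                 # Bob: Have you seen Mary?
print(talk_to("Mary"))                                # Mary: Bob mentioned you.
print(on_enter_trigger_volume("Mary", "cursed_hat"))  # Mary attacks!
```

The point is not the code itself but its shape: every outcome is a hand-enumerated branch on flags, which is why the number of meaningful interactions is bounded by what designers can write by hand.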
At a rough count, Skyrim has about 1200 named NPCs. Someone has to dress them, give them an inventory, dialog, whatever. NPCs like "Falkreath Guard" would be cloned to save a lot of time. They're given stock phrases to use when the player clicks on one -- things that make sense in any situation. If the player has a particular game state set (he's the Archmage, for example), or a given skill passes a certain threshold, or a particular weapon type is equipped, there will be additional stock phrases added to the pool.
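That pool mechanism can be sketched as a list of (condition, line) pairs filtered against the player's state. The condition names and phrases below (apart from the armor line quoted next) are invented for illustration, not taken from Skyrim's data:

```python
# Hypothetical sketch of a conditional stock-phrase pool for a cloned
# guard NPC. Condition names and most lines are invented.
import random

STOCK_PHRASES = [
    (lambda s: True, "Light Armor means light on your feet. Smart."),
    (lambda s: True, "No lollygagging."),
    (lambda s: s.get("is_archmage", False),
     "You're the Archmage of the College, aren't you?"),
    (lambda s: s.get("sneak", 0) > 50,
     "Keep your sneak-thief ways out of this city."),
]

def guard_greeting(player_state: dict) -> str:
    # Build the pool from phrases whose conditions pass, then pick one.
    pool = [line for cond, line in STOCK_PHRASES if cond(player_state)]
    return random.choice(pool)
```

With an empty player state, only the two unconditional lines are eligible; flags and skill thresholds merely widen the pool -- they never make the guard actually react to what the player just did.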
"Light Armor means light on your feet. Smart." -- any guard, anywhere.
This is how it worked in the original Fallout. And indeed, pretty much every RPG from the 1990s. That's right, the fundamental paradigm for quest and world-building hasn't drastically changed for twenty years.
The fundamental challenge is that as world sizes grow, and the number of actions the character can perform grows, the number of possible connections between every NPC and every other NPC, and the player, also gets very large. And designers are still mostly implementing these connections by hand.
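To put a rough number on that growth: even counting only one potential link per pair of NPCs, connections scale quadratically with world size, while hand-authored content scales with designer hours:

```python
# Pairwise connections among n NPCs: n choose 2 = n * (n - 1) / 2.

def pairwise_connections(n: int) -> int:
    return n * (n - 1) // 2

print(pairwise_connections(100))   # 4950
print(pairwise_connections(1200))  # 719400 -- Skyrim's rough named-NPC count
```

A twelve-fold increase in NPC count yields roughly a 145-fold increase in potential connections, which is why the fraction that get hand-implemented must shrink.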
So there is a fundamental conflict between traditional RPG narrative and a sandbox open world:
As world sizes, NPC count, and player freedom increase, the percentage of the player's actions which register as meaningful to the game necessarily decreases.
In short, the semiotics become increasingly dissonant with the gameplay, because the semiotics of go-anywhere-do-anything sandbox RPGs are about choice and permanence.
This in turn is simply a function of how games are made. Skyrim is a huge world, and if the Internet is to be believed, it took about 90 developers and $85 million. MMOs have the added constraint of multiplayer completely breaking traditional quest logic, which is why phasing and similar workarounds were invented. Basically, MMOs have devolved into single-player games to protect quest logic, and single-player RPGs are decreasing the meaningful choice as the world gets bigger and the player can do more and more of everything.
This is the part of the article where the author usually advocates The Solution(tm) and then tries to sell you something. He or she might say something vague like "well, what we really need is more AI generated non-linear storytelling," and then just casually slip in there that his or her current project is doing precisely that.
But there is a massive gulf between buzzwords and implementation.
When you really get down to how the AI solution would look, it's not that easy. For example, most game dialog is (in theory) crafted by serious writers who work to give the NPC a personality and a unique voice. It seems obvious that for a story to flow, a human needs to at least check it over. Otherwise you get dissonant situations like a rash of arrow-knee wounds amongst the guard population in your world.
The proposition of AI-generated non-linear storytelling is to write an AI that makes a quest for Bob, a quest for Mary, and determines how the quests interact. If it wants Mary to attack the player on sight, it draws a bounding box around her that doesn't interact badly with the environment. And the writing has to be passable, and it has to feel right (also note VO is out, unless integrated text-to-speech is used). I want to stress that this is all really, really, really hard. And the mere notion sounds absurd to anyone reading this with a background in computer science.
But out of the 100 people who directly develop a modern game, a large portion of the coders are working on the technology around the game, rather than what players think of as "the game." This is everything from writing shaders, to implementing features on graphics cards, to helping the artists get a pipeline from 3dMax to the game.
Game features just tend to be engine capabilities which allow designers to make something in the editor or native scripting language. So for example, the Skyrim bookshelves are functionality that some coder probably added to a container, and I'm guessing it relies on very specific parameters for the bookshelf 3d model for it to even work.
All I'm saying is that maybe games could live with less of the latest-and-greatest-in-graphics-technology and neat little widgets like bookshelves and instead spend time reinventing how narrative gets implemented. Graphics have gone through literally a dozen revolutions in the last 20 years. Because the industry cared about it.
Bandwidth on graphics cards has increased by a factor of 16 since 1999. In 1996 the Diamond Monster 3D was a revolution -- a 3d accelerator on a card! (In those days, you had separate 2d and 3d cards.) Then the GeForce 1 changed it all again with an on-board GPU. We've been through several different form-factors. SLI was popular, and then it was unpopular, and now it's popular again.
We are doing things now that seemed laughably impossible in the 90s. Go Google search any video game from the mid-90s and realize that pixel art wasn't there to be cute, it was there because VGA graphics were all you had. It was made to look cute, because really good artists can use art style to compensate for low quality graphics. You had to do it like this.
And VGA itself was a revolution from the world of 16 colors -- where there wasn't enough graphical fidelity to even have an art style. We have revolutionized graphics numerous times in the last twenty years, but we're still making state-based quests and scripted dialog basically the same way we always have.
Obviously, no sane AAA studio is going to start a revolution with an $85 million game -- that's too much money to be risky. And to Skyrim's credit, it has taken baby-steps in generated stories with the Radiant system for dynamic content. And (again, if the Internet is to be trusted) it made $450 million during the first week of sales, so I'm obviously not saying the current RPG formula doesn't appeal to the market.
Daggerfall was released in 1996. It had spell-creation, enchantment, and a political system. It had a great story. Does the narrative and open-world gameplay of today's sandbox Action/RPGs really represent a 15 year evolution?
I just think we can do better as an industry. Especially in sandbox games that are supposed to be about freedom and choice, where everything in the game but the narrative is giving you freedom and choice.
But without consistency in permanence, you get a game filled with semiotically dissonant moments. Because can something really be permanent if it doesn't register with the very NPCs that the game is trying to convince the player exist in its living, breathing world?
And yeah it may take us 20 years to get there. But then shouldn't we get started?