This is the fourth post in a 6-part series on Designing for Interactive Story.
A Few Newish Concepts
One concept that hasn’t been used much, but will probably get applied more as world simulation becomes commonplace, is the notion of NPC behavior being based on responses to generalized attributes and properties. A big advantage of this type of approach is that it makes NPC behavior much more flexible: NPCs can react to anything that has attributes. It means that as the PC’s or other NPCs’ attributes change, an NPC’s behavior towards those characters can change as well. Another advantage of this kind of system is that NPCs can be picked up and dropped anywhere in the game world; similarly, new NPCs can be introduced mid-game, and existing NPCs will know how to respond to them, because they are really responding to the properties. To make this clearer, imagine an NPC that is programmed to hate anything green, to attack anything weaker than it is, and to run away from anything stronger. These are simple attributes that lead to complex, variable behavior. You, the player, may drink a potion and turn green, and now the NPC that was nice to you hates you. It moves to attack you; you pick up a sword, and now it turns and runs away. This NPC is capable of these behaviors with any other NPC it meets, depending on their qualities as well. This simple concept can be extended to great effect. (Scribblenauts, Spore, Doki-Doki Universe)
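The green-hating NPC above can be sketched in a few lines. This is a minimal illustration, not any particular engine’s API; the attribute names and thresholds are invented:

```python
def react(npc, other):
    """Pick a behavior by inspecting the other entity's attributes.

    The NPC responds to generalized properties (color, strength), so it
    can react to any entity that carries them -- player or NPC alike.
    """
    if other.get("color") == "green" and npc.get("hates_green"):
        # Hatred alone doesn't decide fight-or-flight; relative strength does.
        if other.get("strength", 0) < npc.get("strength", 0):
            return "attack"
        return "flee"
    return "ignore"

npc = {"strength": 5, "hates_green": True}
player = {"color": "blue", "strength": 3}
print(react(npc, player))   # ignore
player["color"] = "green"   # the player drinks the potion
print(react(npc, player))   # attack
player["strength"] = 8      # the player picks up a sword
print(react(npc, player))   # flee
```

Because the check reads properties rather than identities, dropping a brand-new green creature into the world produces sensible behavior for free.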
The most common way to control the flow of a story is by tying it to spatial elements on paths or on a map, and then controlling the player’s progress through that space. This is certainly not the only way to control story flow. A few experimental games, like Façade, Prom Week, Cart Life, or The Sims, have made interesting attempts to control story flow within a single, limited space. Just like the other games we’ve talked about, these games still have conceptual gates: conditions that the player meets which trigger story events. These could be story pieces that get unlocked in a fixed sequence, but designers who take this route almost always take a more simulation-esque approach and break their story into “possible events” that are conditional and can lead to other events, making the outcome less certain and giving players more control. Façade even goes so far as to create a “story manager” that tries to guide events by prioritizing outcomes based on their effect on ‘story beats’, a story climax, etc.
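A conceptual gate is really just a condition on world state paired with a consequence. Here is a toy sketch of conditional “possible events” feeding each other; the event names and state keys are invented for illustration:

```python
# Each event has a gate condition ("when") and a flag it sets once fired.
# Because "confession" checks a flag that "meet_rival" sets, events can
# chain into one another instead of unlocking in a fixed sequence.
events = {
    "meet_rival": {"when": lambda s: s["day"] >= 2,
                   "sets": "rival_met"},
    "confession": {"when": lambda s: s.get("rival_met") and s["courage"] > 5,
                   "sets": "confessed"},
}

def tick(state):
    """Fire any not-yet-fired event whose gate condition is now satisfied."""
    fired = []
    for name, ev in events.items():
        if not state.get(ev["sets"]) and ev["when"](state):
            state[ev["sets"]] = True
            fired.append(name)
    return fired

state = {"day": 1, "courage": 3}
print(tick(state))               # [] -- no gates open yet
state.update(day=2, courage=7)
print(tick(state))               # ['meet_rival', 'confession']
```

A Façade-style story manager would sit on top of a structure like this, scoring the open gates against a desired dramatic arc before choosing which to fire.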
As NPC AI gets more and more sophisticated, one possible approach to controlling story flow might be to use the goal-states of some key NPC character. In a sense, the NPC itself becomes the holder of all of the ‘gates’. Imagine an NPC that has goals it wants to satisfy in the world, goals that give it a purpose. You, as the player, are tied to this NPC’s goals. Your goals may be to help or to hinder the NPC, but either way, your goals are defined by its current goals. As the NPC’s goals shift, your goals shift. As the NPC achieves its goals, world states (or story states) change and you enter new chapters of the story. This method (which may be in use already, though I haven’t seen it) would feel very organic and would make story progression feel less ‘puzzle-y’ and more character-based.
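One way to picture this: the key NPC carries an ordered queue of goals, and the current chapter is simply a function of whichever goal it is pursuing. A minimal sketch, with entirely made-up goal and chapter names:

```python
class KeyNPC:
    """The NPC's goal queue doubles as the story's gate structure."""

    def __init__(self):
        self.goals = ["escape_city", "find_ally", "confront_villain"]

    def current_goal(self):
        return self.goals[0] if self.goals else None

    def achieve_goal(self):
        # Completing a goal advances the world state -- and the story.
        return self.goals.pop(0)

# The chapter you are in is derived from the NPC, not from map position.
chapter_of = {"escape_city": 1, "find_ally": 2, "confront_villain": 3}

npc = KeyNPC()
print(chapter_of[npc.current_goal()])  # 1
npc.achieve_goal()                     # the NPC escapes the city...
print(chapter_of[npc.current_goal()])  # 2 -- a new chapter begins
```

Whether the player helps or hinders, progression is driven by the character’s state rather than by a checklist of puzzle locks.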
Another approach that has probably been used already in a few games (though I can’t think of them right now) is the technique of giving the player-character some autonomous qualities. Put another way, your avatar, who is almost certainly the main character in the story, may have a will of its own. It may not always want to do the things you “command” it to do. This puts you more in the role of influencer than controller. Perhaps you are a god, or the conscience of your character. Perhaps your character even addresses you directly, getting pissed off at the things you have asked him or her to do, and perhaps you have a means of conversing with this character. This is an interesting way to give your character a more believable presence, and it gives you yet another possible point of emotional connection to the game world.
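Mechanically, “influencer rather than controller” can be as simple as a compliance check between your command and the avatar’s own preferences. A toy sketch, with invented reluctance values:

```python
import random

def command(avatar, action):
    """The avatar obeys only if the request doesn't offend its own will.

    Reluctance is a 0..1 probability of refusal per action; anything the
    avatar has no opinion about is obeyed without complaint.
    """
    reluctance = avatar["dislikes"].get(action, 0.0)
    if random.random() < reluctance:
        return f"refuses to {action}"
    return f"does {action}"

avatar = {"dislikes": {"steal": 0.9, "walk": 0.0}}
random.seed(1)  # seeded so the demo is repeatable
print(command(avatar, "walk"))   # does walk
print(command(avatar, "steal"))  # refuses to steal
```

A fuller version would have the refusal trigger dialogue back at the player, which is where the “conscience of your character” framing gets interesting.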
Almost always in games we interact via some avatar who is part of the game fiction. There have been a few experiments where players interact with the characters in the game as themselves, essentially talking to or dealing with an AI character on the other side of the glass. Breaking through the fourth wall and being “seen” and recognized by a smart NPC character is one way to get people to connect emotionally. So far these experiments in direct interaction have been more about showing off some new peripheral device, and have felt perhaps a bit gimmicky, but at some point this technique of breaking the fourth wall will be used in a deeper way, allowing players to make a connection with AI characters that feels extremely immediate and real. Being anonymous allows us to stay at an emotional distance, while being ‘seen’ and vulnerable essentially forces us to feel something.
One very simple way to get an emotional reaction from players is to encroach on their personal space. As much as we like to think of ourselves as intellectual beings, we are also products of millions of years of evolution, and our animal brains kick in and take over when certain stimulus-buttons get pushed. In a horror setting this can be done by using darkness and then making things suddenly appear very close to the screen; many horror-themed games do this (e.g. Five Nights at Freddy’s). In evoking gentler emotions, this ‘invasion of personal space’ is harder to achieve unless one straps on VR goggles, or perhaps uses tactile, haptic devices. A VR game called Summer Lesson takes this technique to the extreme, trying to provoke a visceral response by having NPC characters get extremely close and invade the player’s personal space. Given the young male audience it is appealing to, and the fact that it’s a Japanese game, you can probably guess what type of NPC characters they are using to get this reaction. (nudge nudge, wink wink)
If we start from the premise that we want to create great story experiences for people, one of the first issues we run into is creating interesting interactive characters. While it’s true that some stories have been made with a single “hero” character and no other characters in the world, one has to admit that this severely limits the type of stories one can tell, and the degree of detail and subtlety one can deliver. The same observation can be made about the importance of language in stories. Some beautiful movies and games have been made with little or no language; there can be a certain grace and mystery to that extreme level of simplicity. Still, think of all the stories you’ve loved over your life, and how many of them had no talking in them. So… that brings us back to the recurring difficult question: if we need characters, and we need language, to make compelling stories, how do we deal with language?

A few pages back we talked about conversation trees and players selecting fixed choices from a menu. We didn’t mention Natural Language Processing (NLP): the ability to allow players to type anything they want, or better yet, speak naturally. The game then translates their speech into text, the program parses their input into something it can recognize, and the game code selects an NPC response from a set of pre-made phrases or communicative behaviors.

Let’s remember that communicating doesn’t necessarily mean talking. We humans do at least half of our meaningful communication in the non-verbal realm, with our bodies and faces. Something as simple as looking away and not answering is an effective communication; actions and expressions convey feelings and intent. This is an area that has rarely been touched on in games. One very old experiment in communication was a product called Seaman on the Sega Dreamcast. Another, much more recent, called Milo, allows players to speak naturally and tries to recognize tone, facial expression, and body language. Needless to say, there are many problems being tackled at once here.
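The input-to-response half of that pipeline can be sketched very simply. A real system would use speech-to-text and a proper NLP parser; this keyword matcher (with made-up intents and canned replies) only illustrates the flow from free-form input to a pre-made communicative behavior:

```python
# Step 1 of the real pipeline (speech -> text) is assumed done; we start
# from the text. Parsing here is a crude keyword match standing in for NLP.
INTENTS = {
    "greet":  {"hello", "hi", "hey"},
    "insult": {"stupid", "ugly"},
}

# Note the insult response is non-verbal: communicating isn't just talking.
RESPONSES = {
    "greet":   "Nice to meet you!",
    "insult":  "*looks away and says nothing*",
    "unknown": "Hmm?",
}

def parse(utterance):
    """Map free-form player text onto one of the intents we recognize."""
    words = set(utterance.lower().split())
    for intent, keywords in INTENTS.items():
        if words & keywords:
            return intent
    return "unknown"

def npc_reply(utterance):
    return RESPONSES[parse(utterance)]

print(npc_reply("Hello there"))     # Nice to meet you!
print(npc_reply("You are stupid"))  # *looks away and says nothing*
```

The hard problems, of course, all live inside `parse`: real understanding, tone, and the non-verbal channel that systems like Milo attempt.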
An entirely different set of challenges lies in the realm of generative language. This is a whole other ball of wax, as it were. Generative language, for anyone who doesn’t already know, is the process of constructing phrases out of individual words based on some internal notion of “meaning” and context, coupled with rules of syntax. This is of course what our brains do (unless you happen to have a set of pre-recorded phrases built into your brain). Representing “meaning” and creating systems that can, in some sense, “understand” is a super exciting emerging area of AI, but obviously not one to be jumped into lightly.
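At its very simplest, the syntax half of this can be shown with a toy phrase grammar expanded recursively. Real generative systems model meaning and context; this sketch (with an invented five-word grammar) only shows construction from rules of syntax:

```python
import random

# A tiny context-free grammar: each symbol expands to one of its productions.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["knight"], ["dragon"]],
    "V":  [["sees"], ["follows"]],
}

def expand(symbol):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in GRAMMAR:
        return [symbol]  # a terminal word: emit it as-is
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in expand(part)]

print(" ".join(expand("S")))  # e.g. "the knight follows the dragon"
```

Every output is grammatical, but none of it *means* anything to the program, which is exactly the gap that makes generative language so hard.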
Here is one last, minor footnote having to do with Natural Language (NL) in games: the difficulties NL imposes when localizing a product for other countries. This is hardly the biggest barrier to implementing NL, but it is one of many that have kept it from seeming cost-effective for developers. That said, with our coming age of AI and VR, and with text-to-speech finally reaching a point where it is reliably usable, we will be seeing products pushing the boundaries of Natural Language and expressive non-verbal communication.