Harvey Smith (firstname.lastname@example.org)
As an art form, immersive games are in a transitional state, currently positioned on the cusp of something almost unrecognizably different. Future games will employ deeper simulation in order to achieve far greater levels of interaction and complexity, while simultaneously simplifying the learning curve for new players. Most game environments of the past have been based on crude abstractions of reality, limiting player expression and requiring users to learn a completely new vernacular in order to play. The games of the future will rely heavily on much more complex, high-fidelity world representations that allow for more emergent behavior and unforeseen player interactions. Taken together, these next-generation design paradigms are not simply improvements over older models; they represent a fundamentally different approach to simulating real-world physics, handling artificial intelligence and designing usable interfaces.
Using the award-winning and critically acclaimed game Deus Ex as an experimental foundation for discussion of these new design paradigms, come explore the theories that will bring about the renaissance of the next generation of interactive exploration.
I Introduction: DX and Me
II Lecture Overview
III Simulation Overview
IV Game Simulation-Specific Systems
V And Beyond
I Introduction: DX and Me
Hello. I’m Harvey Smith from Ion Storm Austin, an Eidos studio. I was lead designer of Deus Ex and I’m project director of Deus Ex 2. This is intended as a lecture concerning the ways in which increasingly complex simulations will lead to richer gameplay environments in the near future. This is my first trip to Canada and the first time I’ve attended the MIM conference. I’m glad to be here. Prior to working for Ion, I worked at two other game companies: Multitude, where I was lead designer of a game called FireTeam, and Origin Systems, where I worked on several games in a variety of roles. I started in the game industry as a quality assurance tester in 1993.

Deus Ex, the game our studio finished last year, was a hybrid game that attempted to create an environment in which the player was calling the shots as much as possible. The game mixed a variety of genre elements, including:
- The action and point-of-view of FPP shooters.
- The story, character development and exploration of role-playing or adventure games.
- The economic management and strategic expression of strategy games.
Deus Ex tried to provide the player with a host of player-expression tools and then turn him loose in an immersive, atmospheric environment. We wanted to do this in a way that did not limit the player to a few predefined choices, but instead allowed the player to come up with his own strategies within the flexible rules of the environment. We wanted to allow the player to approach the game from the direction of his choice, employing his own play-style cobbled together from the options we allowed. Sometimes we succeeded; sometimes we fell back on more traditional (more limited) means of providing interactivity. The desire to give this talk today was largely fueled by seeing both moments in Deus Ex.
When we did succeed in implementing gameplay in ways that allowed the player a greater degree of freedom, players did things that surprised us. For instance, some clever players figured out that they could attach a proximity mine to the wall and hop up onto it (because it was physically solid and therefore became a small ledge, essentially). So then these players would attach a second mine a bit higher, hop up onto the prox mine, reach back and remove the first proximity mine, replace it higher on the wall, hop up one step higher, and then repeat, thus climbing any wall in the game, escaping our carefully predefined boundaries. This is obviously a case where, had we known beforehand about the ways in which these tools could be exploited, we might have capped the height or something. Most of the other surprise examples I’ll mention today are going to be ‘desirable’ examples of emergence or emergent strategy. But I thought I’d start with an undesirable example because that’s one of the things you have to watch for in attempting to create flexible game systems that behave according to implicit, rather than explicit, rules. In any case, we were delighted by the flexibility of the system, the ingenuity of the players and the way that the game could, in some ways, be played according to the player’s desires, not the designers’.
When we failed in our attempt to implement gameplay according to our lofty goals and instead fell back on some special case design, players sometimes felt robbed if their actions caused a situation to ‘break’ or if we failed to account for some desired play-style. For instance, many times we included three paths through a map and each corresponded heavily to a play-style like stealth, combat or high-tech hacking. If a player typically resorted to some other play-style (like seeking out water passages and using SCUBA gear to get past obstacles), then that player acutely felt the limitations of what we had offered. Instead of feeling like he was operating within a flexible simulation with consistent rules, suddenly the player felt as if he needed to figure out what the designer wanted, what the designer had explicitly planned as the ‘right way’ to negotiate a part of the game. This problem was even further exacerbated in the few cases where we provided only a single option. For instance, at one point in the game (for plot purposes), we wanted the player to set off a security alarm in one particular research lab complex. There was no way to avoid setting off this particular special case alarm, even for the player who had spent most of his in-game time and resources on playing as a counter-security specialist. Players felt completely robbed. This was a forced failure in Deus Ex, created by a special case break in the consistency of our game rules.
The success cases in Deus Ex tended to rely on the interaction of flexible sub-systems within the game (and were about what the player wanted to do). The moments that I perceive as failures tended to rely on special-case triggering or scripting (and were more about what the designer wanted the player to do). The experiences we had working on DX1 motivated us to move further toward more deeply simulated game environments. I’ll return to Deus Ex off and on, but first let me briefly outline my talk.
II Lecture Overview
I’m going to try to provide a basic overview of simulation, from a gameplay-centric standpoint. I want to include a number of examples of simulated (and emulated) game systems and I’ll use Deus Ex as a case study for problems that occur when trying to increase a game’s possibility space. Afterward, I’ll briefly (and perhaps foolishly) speculate on the far future of such games, and then, if we have time, I’ll open the floor to some questions.
At this point, I’d like to define some terms I’ll be using:
- Granularity/Fidelity of Simulation: Through the course of this speech, I refer to a simulation (or a representational model) as either higher or lower fidelity. A high fidelity simulation would be a more richly simulated model, taking into account a greater number of details. Similarly, I refer to a simulation as being of either finer or coarser granularity. Again, a representation model of finer granularity would be more complex, taking into account a greater number of states.
- Immersive Sims: Immersive Sims attempt to make the player feel as if he is actually within the game’s environment, allowing him to suspend disbelief. While true for many games, for the Immersive Sim this becomes a primary goal of the design vision. Immersive Sims attempt to model the environment and the interactions in higher fidelity and in a less prescripted, more player-flexible fashion. A simulation allows for experimentation within the system; this is key to the sim experience.
- Possibility Space: Games exist as a set of parameters within which the player is more or less free to experiment. As designers, we are creating a possibility space for the player to explore. The parameters have to demarcate what is possible, the player’s tools have to enable special actions, and the interface and situational context have to communicate to the player what he can do, how effective his attempt has been and why he succeeded or failed. A higher fidelity simulation allows for a greater range of player expression (or permutations of options and outcomes); therefore, with a deeper simulation, the player has more conceptual space to explore.
- Emergent Gameplay: You could define emergence as an event that occurs, but that could not simply have been inferred from a system’s rules. Emergent behavior occurs when a system acts in an organized fashion beyond the sum capabilities of its individual parts. Imagine a light-detecting sensor on a parking lot streetlamp. When it gets dark, according to the light sensor, the streetlamp comes on. When the streetlamp comes on, crickets are attracted to the surrounding area. Eventually, the bodies of the crickets block the light sensor, so that the streetlamp is on all the time. This is a system. There are simple one-to-one relationships between the individual parts of the system. (Like, the light sensor turns on the streetlamp. Or, crickets are attracted when the streetlamp is on.) But there are also indirect relationships between the individual parts of the system. (Like, the crickets and the streetlamp: the crickets simply were drawn to the light, yet, at a more complex level that might not be inferred from the simple relationships between the individual parts of the system, the crickets directly affected the light sensor.) In games based on flexible simulations, emergence becomes possible, enabling a much wider range of events than the simple elements of the game would indicate individually.
- Games are all about letting the player express himself.
- A game with a larger possibility space is one that allows the player more range of expression.
- We can achieve broader possibility spaces by more deeply simulating game systems: In comparison to game systems of coarser granularity, contemporary simulation allows for revolutionary levels of player expression.
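To make the streetlamp example concrete, here is a tiny, hypothetical Python sketch of that system. Each rule is a simple one-to-one relationship; nowhere does the code state that the lamp ends up stuck on around the clock. That behavior emerges from the rules interacting. The rates and thresholds are invented for illustration.

```python
# Hypothetical sketch of the streetlamp/cricket system. Rates and
# thresholds are invented for illustration.

def simulate(hours=96):
    dead_crickets = 0  # cricket bodies accumulated on the light sensor
    history = []
    for hour in range(hours):
        daylight = 1.0 if 6 <= hour % 24 < 18 else 0.0
        # Rule 1: the sensor reads ambient light, dimmed by cricket bodies.
        sensed = daylight * max(0.0, 1.0 - dead_crickets / 50.0)
        # Rule 2: the streetlamp turns on when the sensor reads darkness.
        lamp_on = sensed < 0.5
        # Rule 3: crickets swarm the lit lamp; some expire on the sensor.
        if lamp_on:
            dead_crickets += 1
        history.append((hour, lamp_on))
    return history

log = simulate()
print(log[12])  # (12, False): noon on day one, lamp correctly off
print(log[84])  # (84, True): noon on day four, sensor blocked, lamp stuck on
```

Nothing in the three rules mentions a stuck lamp; running the loop is the only way to discover it, which is exactly the point about emergence.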
The Goal of the Lecture:
The idea is to inspire developers from both technical and creative disciplines toward the use of deeper simulation, allowing for more emergent gameplay and strategy. In creating game spaces, we have been moving incrementally toward more complex representations and we are on the cusp of a revolutionary change, a moment at which a great deal of the designer’s creative power will be deliberately passed to the player. While working on Deus Ex, we felt the limitations of our crude game system models acutely; the demand for a higher fidelity game world has outstripped the abilities of old-style approaches to game development. (I’ll cite examples of this throughout this lecture.)
In the near future, an increasing number of games, including DX2, will attempt to move closer to a more thorough simulation-based game design, relying on more complex representations. This is not only going to produce more variable, player-driven gameplay, but it’s also going to save a lot of time and money on the production side of development. My goal here is not to bash my own game (as I am sometimes accused), or to pick on anyone else’s game. My goal is to pass along some of the excitement that I’ve picked up (mostly from my mentors) with regard to the potential, near-term impact of deeper simulation in games.
Okay, that’s the intro and overview…let’s get going.
III Simulation Overview
A simulation is a representational model. Computer and video games have obviously simulated aspects of the real world (or some skewed version of it) from day one. Early on, most of the simulations involved were fairly simple. For instance, in Pitfall, the classic Atari 2600 game, the notion of gravity existed; if the player leaped, he moved up and forward, then fell, in a crude approximation of gravity. On the other hand, you could point to Lunar Lander (and a few other space games) as examples in which a concept like gravity was modeled in much greater detail, accounting for planetary mass, directional thrust and momentum.
Modern examples of representational game systems abound, from crude models to overly complicated models. For instance, some FPP games have allowed the player to get into vehicles. In some of these cases, the vehicle physics simulation is too crude with regard to the way it interacts with the terrain, allowing the player to get stuck on small hills that it seems like the vehicle should be able to negotiate. On the opposite end of the simulation scale, Trespasser is probably a game that, despite any innovations or strengths it might have had, could be said to have failed because it featured overly complex simulations without the requisite control and feedback. So the vehicle stuck on a small bump is a symptom of a simulation that’s too crude for the game; conversely, Trespasser’s problems were a symptom of a simulation that was too complex for the game.
In the past, games have been mostly about branching paths. The designer manually sets up a number of outcomes or interactions and allows the player to pick one. This merely equates to a handful of canned solutions to a particular game problem. (Some hypertext writings refer to this as “multilinear,” or allowing simultaneously for multiple linear options of equal value.) Deus Ex featured some options for player expression that were facilitated by systems of coarser granularity. (Good examples here might include our branching conversation system or a critical room that could be entered at only three specific spots, each representing a different approach.) Manually setting up solutions to game problems requires a lot of work on the part of the team, can result in inconsistencies and generally only equates to a small number of possibilities for the player. However, Deus Ex also featured options for player expression that were facilitated by systems of finer granularity. (Good examples might include some of the player-tools that we provided that were tied into analogue systems like lighting or sound, such as the ability to see through walls or dampen the sound of movement. These tools interacted with our enemy awareness models in numerous, fairly complex ways. They could be activated at any time in a very wide range of situations, incorporating distance, facing, enemy type, etc.) The finer-granularity systems required more feedback and introduced some uncertainty that equated to some interesting degenerate exploits; but the freedom players felt more than made up for these costs.
Essentially, almost all games involve representational models of reality. So why talk about simulation? What’s happening is that the models are becoming finer in granularity. We’re talking about a scale here, with incrementally more weight being added to the sim side. We’re slowly moving toward games built upon much higher fidelity conceptual models, with greater control or self-expression. At some point, the scale will tip. There will come a point (in part, an arbitrary point) at which gameplay in the average game will be much richer because the player will be presented with a vastly larger range of expressions. Yes, we’re moving incrementally along, but at a certain point, the systems become flexible enough to allow for emergence, at which point the experience is more about the player’s desires.
Example list of slow progress metrics toward more complex simulation:
- Example one: Birds fly up when the player enters a trigger radius. This is somewhat interactive…it requires the player to approach a specific spot, at least.
- Example two: Birds fly up when player draws within range or when specific events occur. For instance, weapons are explicitly told to broadcast a “birds scatter” type message.
- Example three: Birds fly up in response to dynamically generated stimulus based on lower-level relationships between the unit and the stimulus. For instance, sight of enemy, loud/sudden sound, bright/sudden light, rapid motion. This version could get increasingly complex, depending on how you model the stimulus created by the player (or other in-game agents), like light or sound, as well as how you model the birds’ perceptions.
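A minimal sketch of example three, with invented names and thresholds: stimuli are generated dynamically with a kind and an intensity, and each bird decides for itself whether a stimulus is strong enough, after distance falloff, to startle it. No designer ever tells the birds about guns or footsteps specifically.

```python
from dataclasses import dataclass

# Hypothetical stimulus/perception sketch; names and thresholds invented.

@dataclass
class Stimulus:
    kind: str         # "sound", "light", "motion", ...
    intensity: float  # strength at the source
    x: float
    y: float

@dataclass
class Bird:
    x: float
    y: float
    startle_threshold: float = 0.5

    def perceived_intensity(self, s: Stimulus) -> float:
        # Inverse-square falloff with distance, clamped near the source.
        d2 = (self.x - s.x) ** 2 + (self.y - s.y) ** 2
        return s.intensity / max(d2, 1.0)

    def startled_by(self, s: Stimulus) -> bool:
        return self.perceived_intensity(s) > self.startle_threshold

gunshot = Stimulus("sound", intensity=50.0, x=0.0, y=0.0)
footstep = Stimulus("sound", intensity=0.4, x=0.0, y=0.0)
bird = Bird(x=3.0, y=4.0)  # five units from the source
print(bird.startled_by(gunshot))   # True: birds scatter
print(bird.startled_by(footstep))  # False: too faint at this range
```

Any new weapon or event that emits a stimulus automatically works with the birds, which is the difference between example two (explicit "birds scatter" messages) and example three.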
This brings up the question: Why should we continue to attempt to build games around higher fidelity simulations? Why is a wider range of expression better? Multiple reasons:
- Simulation allows for more emergent behavior on the part of the game’s systems and more emergent strategy on the part of the player. New gameplay is possible and a larger/deeper possibility space is created. Basically, this means that the player will have more than “a few canned options,” which provides the game with greater potential to be perceived by players as interesting.
- Games typically have more consistency when response to player stimulus springs from the interaction (according to rules about relationships) of the elements of a simulated system (as opposed to when response to player stimulus is derived from a bunch of special case, designer-driven instances).
- As a labor-cost benefit, a better-simulated game environment requires less time to create content. This saves money, but it also allows designers more time to focus on tuning the gameplay. For instance, collectible card games feature an individual card’s rules-of-play on the face of each card. The cards have been categorized into a system, with each card falling into a subclass. As a result, the rules written on each card do not have to explain how the card works with every other card created for the game; instead, each card’s rules only explain how it interacts with a card subclass (or multiple subclasses). To be more specific, imagine a card for the Harry Potter card game (if that thought is not too painful) that stated, “Affects the following cards…” This would require designer consideration of each card; it would require lots more space and lots more writing, plus it would preclude our example card from working with any future, unplanned cards. By instead using a system, with global rules governing the relationships between subclasses of cards, the game a) does not require the designer to consider every possible permutation, b) allows the card to function with future card releases and c) allows for emergent strategy. (Which leads to our next consideration…)
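The card example can be sketched as a lookup table keyed by subclass rather than by individual card. The card names and subclasses here are invented; the point is that a rule written once covers cards that do not exist yet.

```python
# Hypothetical sketch: interaction rules keyed by card subclass, not by
# individual card. Card names and subclasses are invented.

RULES = {
    # (source subclass, target subclass) -> effect
    ("enhancer", "organic"): "becomes a mech unit",
}

def resolve(source, target):
    """Look up the effect of playing `source` on `target` by subclass."""
    return RULES.get((source["subclass"], target["subclass"]))

enhance = {"name": "Cyber Graft", "subclass": "enhancer"}
soldier = {"name": "Infantry", "subclass": "organic"}   # card designed today
medic = {"name": "Field Medic", "subclass": "organic"}  # future, unplanned card

print(resolve(enhance, soldier))  # becomes a mech unit
print(resolve(enhance, medic))    # the same rule covers the new card for free
```

One table entry covers every present and future "organic" card, where per-card enumeration would need a new rule for each printing.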
There are also a couple of side effects of setting out with the goal of creating games around deeper simulations:
- Emergence in games is mostly a benefit with potentially wondrous ramifications, but also something of a cost. In a flexible system in which designers don’t attempt to provide an explicit relationship for every element in the system, uncertainty is introduced. This often leads to interesting implicit consequences: players can formulate plans that spring from indirect interactions of the rules system. For instance, in the online strategy game ChronX, a player can obtain and use one of the game powers to enhance an organic unit (like a human soldier), making it a more powerful ‘mech’ unit (or a sort of cyborg). Normally, making an enemy more powerful is not something you’d want to do. However, if he has access to it, this player can then use another game power (one that steals enemy mech units) to cause the now-more-powerful, now-mechanized enemy soldier to switch sides. The first card, normally played on a player’s own units to enhance them, does not have an explicit relationship with the card that steals mechs, but they work well together if the player sees and exploits this emergent strategy. Unfortunately, the uncertainty introduced by this approach can also lead to exploits that break the game. Bulletproofing against these exploits requires time and effort. (The Deus Ex ‘proximity mine climbing’ method I mentioned earlier is a good example of such an exploit that we didn’t catch.)
- Another side effect: Purely on the downside of the flexible rules system approach, better user feedback is required to avoid confusing the player, since a more complex simulation usually equates to a more granular range of player expression. For instance, some games have emulated enemy awareness using directional facing. In other words, an enemy unit can only see what is in front of it, within its field of view. Thief (by Looking Glass Technologies) came along and introduced a much deeper awareness model, involving complex sound propagation and lighting that acted as stimuli that the enemy could perceive. Since understanding lighting and shadows was key to the player’s success as a thief, the player needed a really good indicator as to how well lit he was at any given time. Since Thief is a first person perspective game, the designers added a “light gem” feedback device to inform the player as to his current light-based visibility. Thief asks the player to understand a much more complicated model, but it also helps the player out by offering some information germane to that model. Using concepts like noise and shadow, and elements like thieves and guards, Thief also puts things into a familiar, realistic context. While ‘realism’ itself is not always the goal in a game, using game settings and elements that relate to the real world (with which the player has great familiarity) often helps make the game inherently more intuitive, sidestepping some of that additional cost. For instance, if you use an element like “fire” as a part of your game systems and it actually behaves like fire does in the real world, players will probably have an immediate understanding of this element without requiring the game to educate them.
IV Game Simulation-Specific Systems
I’ve talked some about specific systems in passing (Thief’s sound propagation and lighting, for instance). Now let’s get more specific:
Sound/Light and Unit Awareness
Many games model ‘enemy awareness’ in some way, attempting to simulate the real-time gathering of information. In most combat games, for instance, enemies perceive hostile or suspicious events. I think we’re at a point where traditional models for perception are just not enough; relying on such models is having an increasingly negative impact on overall gameplay.
For instance, in DX1, sound propagation worked like this: A sound event was broadcast in a sphere outward from a source, ignoring wall/floor surfaces (as if the sound were generated in an empty space). Taking distance into account, units within the broadcast would be alerted (i.e., would ‘perceive’ the sound). A different model was used to determine whether or not to play a sound for the player (involving a line-of-sight check to fake dampening a sound if it was playing through a door, for instance).
By contrast, let’s look at our plan for sound propagation in DX2 (which we think is the next step in the direction undertaken by Thief): A sound event is broadcast in a sphere outward from a source. In cases where the sound hits a surface, we bounce the sound, taking into account the material applied to the surface. (So that carpet muffles the sound, for instance.) The number of bounces is capped. Taking distance into account, units ‘perceive’ the sound if the sound reaches them, directly or by bounce. The same model is used for both player and game unit (or guard) to determine whether the sound is perceptible. Certain acoustic aesthetic effects are ignored on the AI side, but these have nothing to do with whether the AI perceives the sound.
The first model (the one used by DX1) did not always allow the player to predict whether a game unit (like a guard) would hear a sound or not, which led to some really unsatisfying occurrences: Either a guard would hear the player (when the player assumed that he was acting ‘quietly’), or the player would make sound that he assumed a guard should hear (but the guard wouldn’t, making the game’s awareness model feel broken). We think the second model (the one being used for Thief3 and DX2) has the following benefits: We can unify player-related and enemy-related sound propagation, which will allow for a more intuitive game environment. The player will be able to make assumptions about whether a guard will hear him or not based on the player’s own perception of sounds in the environment. We also hope that the higher fidelity model will equate to a more ‘fair’ gameplay model; guards will not hear sounds that are blocked by multiple thick walls. (Again, this will allow the player to make some strategic assumptions, closing a vault door before operating a noisy tool, for instance.)
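A rough sketch of the difference between the two models, with invented materials, numbers and thresholds. A real implementation bounces sounds off surfaces; here, intervening surfaces simply absorb energy, which is enough to show why the second model feels fair: the same `hears` test serves the player and the guards alike.

```python
# Hypothetical sketch of the two propagation models; materials, numbers
# and thresholds are invented. Real propagation bounces sounds off
# surfaces; here, intervening surfaces simply absorb energy.

ABSORPTION = {"carpet": 0.8, "concrete": 0.5, "vault_door": 0.95}

def naive_loudness(intensity, distance):
    # DX1-style: a sphere expanding through empty space; walls ignored.
    return intensity / max(distance, 1.0) ** 2

def occluded_loudness(intensity, distance, surfaces):
    # DX2/Thief-style: each intervening surface soaks up energy.
    loudness = naive_loudness(intensity, distance)
    for material in surfaces:
        loudness *= 1.0 - ABSORPTION[material]
    return loudness

def hears(loudness, threshold=0.05):
    # One test for player and guard alike: a unified perception model.
    return loudness > threshold

noisy_tool = 10.0  # a guard stands 5 units away, behind a closed vault door
print(hears(naive_loudness(noisy_tool, 5)))                     # True: feels broken
print(hears(occluded_loudness(noisy_tool, 5, ["vault_door"])))  # False: feels fair
```

In the first model the guard hears through the vault door; in the second, closing the door before operating the noisy tool actually works, so the player's intuitive plan pays off.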
Anecdotally, I want to mention that DX1 players already do things like closing doors before taking actions (because that is the intuitive thing to do, something we learn from childhood forward, trying to trick our parents and siblings). If players do this, but realize that the system does not take something like a closed door into account, they feel cheated or let down. If they’re going to do it anyway, it makes some sense to model the game according to their intuition and assumptions; we don’t want to pass up the chance to squeeze in an interesting, intuitive game dynamic. (This is a good example of a deeper simulation leading directly to more player expression, more gameplay.)
Physics
Currently, physics is useful for establishing player-action capabilities: limitations related to movement speed, falling damage, gravity, etc. But over the last few years, moving toward more realistic physics has had other significant gameplay ramifications as well.
First, a comment about the word “realistic”:
In games, realism is not necessarily the goal. But if the world seems to behave consistently and in ways that the player understands, it seems that the player has less difficulty immersing himself in the environment, suspending his disbelief. In this way, realism in games is related to intuitiveness and player expectation. It’s also worth noting that if you set up an environment that seems familiar (and thus is intuitive) and then thwart the player’s expectation of that environment, the player often finds it extremely jarring. For instance, we included telephones in Deus Ex and gave them limited functionality. Their presence helped the player identify, say, an office space as a familiar, real-world location. However, we could not possibly make the phone in the game as flexible and powerful as a real-world phone is, and the lack of functionality in the in-game phones served to immediately remind the player that the office space was “fake.” It might have been better to leave the phones out altogether. So realism is not the point (even though it can be useful).
Continuing with “realistic physics”: The first game I played that allowed me to realistically bounce grenades around corners was System Shock. Bouncing grenades around corners is an example of “physics as gameplay.” It’s one step less direct: Instead of going toe-to-toe with an enemy, the player can take up a safer (more strategic) vantage before attacking. The player suddenly had new, interesting options. It also makes the environment more dynamic: If someone moves a crate out into the center of the room, a grenade can then be bounced off the crate. Obviously, collision physics that allow for grenade bouncing gameplay have been around for a while. But the more thorough and more realistic physics simulations of the next generation of games should have interesting ramifications. To cite some examples:
- New gameplay tools: If we track mass and gravity, for instance, we can arm the player with a tool that increases mass, allowing for all sorts of interesting effects. This is one of the goals of our studio: to continue to widen the range of gameplay tools beyond “more guns.” Not because we dislike games with guns, but because we are looking to make the game more interesting…to expand the possibility space.
- More intuitive environment: “Of course paper should burn.” (In today’s games, casual players might be baffled by the physics of the world: Only explosive barrels and bodies burn, sometimes pieces of light furniture cannot be moved around, the player-character can often not perform simple tasks like climbing up onto a desk and sometimes glass does not break. Why *wouldn’t* this harm accessibility? To play, you must re-learn the physics of the world, like a child.) When the world works in a way that makes sense to a human (non-gamer), because it functions in ways that reflect their lifelong experience, the average person is more likely to find the game environment “intuitive” even in fantasy realms and alien dimensions.
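The grenade-bouncing gameplay mentioned above rests on a very small piece of collision response: reflecting the velocity vector about the surface normal and scaling by a restitution factor. A minimal 2D sketch, with invented numbers:

```python
# Hypothetical sketch of the collision response behind grenade bouncing:
# reflect the velocity about the surface normal and lose some energy to
# restitution. 2D vectors as (x, y) tuples; numbers are invented.

def reflect(velocity, normal, restitution=0.6):
    vx, vy = velocity
    nx, ny = normal  # unit normal of the surface that was hit
    dot = vx * nx + vy * ny
    # v' = v - 2(v . n)n, scaled by the energy the bounce keeps
    return (restitution * (vx - 2 * dot * nx),
            restitution * (vy - 2 * dot * ny))

# A grenade thrown rightward hits a wall whose normal faces left:
bounce = reflect((10.0, -2.0), (-1.0, 0.0))
print(bounce)  # it comes back toward the thrower, slower
```

Once this one rule exists, "bounce it off the crate someone moved into the room" requires no designer intervention at all; any surface with a normal participates.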
General Game Systems: Tools and Objects
In the past, gameplay tools (including weapons) had to have explicit relationships with any other elements of the game in order to affect those elements. So a weapon class, for instance, specifically contained code listing all the things it could affect. To use a simplistic example, if you wanted the bullets from a gun to break a window, you had to set up a direct relationship between the weapon entity and the glass entity. Now, there’s an additional layer of abstraction between the two: The weapon projects a bullet entity. The bullet entity carries with it information about its properties (like ballistic damage, heat or electricity, for instance) and the glass is a stimulus-receiving entity. When the bullet meets the glass, the game’s object/property system looks up the effect of the bullet’s properties on the glass entity. There is a set of rules about the relationships between these general-case properties.
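That layer of abstraction can be sketched as a global table mapping (stimulus property, receiver class) pairs to effects. Everything here (property names, receiver classes) is invented for illustration; the point is that the weapon never references the glass directly.

```python
# Hypothetical sketch of the property/stimulus lookup; property and class
# names are invented. The weapon never references the glass directly.

RESPONSES = {
    # (stimulus property, receiver class) -> effect
    ("ballistic", "glass"): "shatter",
    ("ballistic", "potted_plant"): "break",
    ("heat", "paper"): "ignite",
}

class Projectile:
    def __init__(self, properties):
        self.properties = properties  # e.g. {"ballistic"} or {"heat"}

def apply_stimulus(projectile, receiver_class):
    """Resolve every effect the projectile's properties have on the receiver."""
    effects = []
    for prop in sorted(projectile.properties):
        effect = RESPONSES.get((prop, receiver_class))
        if effect:
            effects.append(effect)
    return effects

bullet = Projectile({"ballistic"})
print(apply_stimulus(bullet, "glass"))         # ['shatter']
print(apply_stimulus(bullet, "potted_plant"))  # ['break'], covered for free
```

Adding a new weapon means giving its projectile a property set; adding a new object means declaring which properties it responds to. Neither side needs to know about the other.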
How is this different, from a pragmatic standpoint? The latter, more flexible approach (with the layer of abstraction between the bullet and glass game elements) has the following benefits:
- Global consistency: Game environments now include thousands of object types. Using the old method-involving direct, special case relationships-it would be easy to fail to create a relationship between something, say, like a potted plant and a bullet. So the bullet might ricochet and fail to break the potted plant. This counter-intuitive physical interaction between the plant and the bullet might break the user out of the experience by defying his intuitive expectations. In the more flexible system (in which the bullet merely carries stimulus properties to which damageable object subclasses can respond), everything is more likely to be covered, instead of only the things that were manually given stimulus-response relationships.
- Time saved: Also, since we’re talking about an environment hosting thousands of objects, instead of hard-coding everything, programmers can build tools that allow designers to attribute properties to any new object class via a simple tag. So this model saves development time.
- Emergence: In Deus Ex, we found that players (initially just in QA, but later among the game’s fans) were using an emergent strategy that had never occurred to us. One of the unit types (an MJ12 soldier character) exploded upon death. Our idea was that this would cause the player to react strategically, switching away from a point-blank weapon when fighting this unit. In a more traditional game systems model, we would have created an explosion entity with an explicit relationship to the player, damaging the player if he was within range of the explosion. However, in our more flexible system, we simply spawned a generic explosion with properties related to concussive/ballistic damage. Players figured out that they should lead this unit near a locked container before delivering the final blow. When the explosive unit blew up, it inflicted damage on the locked container, opening it up. (We did not plan this or even foresee it; it just worked.) In this way, players were exploiting the system in order to open locked doors and safes (without spending any lock-picking resources). We were delighted.
It’s largely due to hardware limitations and the nascent state of interactive entertainment that games have by necessity relied on cruder models in the past. No single game project of which I’ve been a part, including Deus Ex, has fully taken advantage of all the opportunities to provide the player with as much exploration and expression as possible. With that qualifier, I will relate the following example:
Recently, at one of the game industry's conferences, I had an opportunity to see the demo for an upcoming game. I've been excited by this game for quite a while. It's essentially an adventure or role-playing game that allows the player to explore a fictional world, building up his power so that he can face increasingly tough threats, while uncovering new pages of the game's plot. This is a traditional conceptual model, but a popular one that has provided a lot of enjoyment over the years. This new game looks and sounds beautiful; I fully expect it to be a lot of fun. (I'll be buying it…) But after talking to one of the developers and watching him play the game, I cannot help pointing out that I think the designers have missed some opportunities. The game seems to feature an extensive set of player tools and powers. However, most of them are purely related to inflicting damage. The rest of the environment is modeled in a very simple way. The game uses a traditional paper RPG-style 'spell' system, which should allow for a great number of interesting player expressions, even if you restrict your thinking to the tactical arena. So, during the demo, I inquired about types of spells that, in paper RPGs, are often exploited in interesting ways beyond toe-to-toe combat. For instance:
- Can the player freeze the water pool (in the cave featured as part of the demo) as a way of creating an alternate path around an enemy?
- Can the player levitate a lightweight enemy up off the ground and thus get by it without resorting to violence?
- Can the player take the form of a harmless ambient animal and sneak past the goblin?
- Can the player create fake sound-generating entities that distract the enemy?
I believe the answer to all these questions is "no." The game was designed around pre-planned, emulated relationships between objects.
Had the game been designed around a more flexible simulation, these sorts of interactions might have just worked, even if they had never occurred to the designers. (All of this still might be possible in the special case emulation model, but would run the risk of a great deal of inconsistency, would require tons of work and would be less likely to produce emergent results.) Had the game been built around more thoroughly simulated game systems, creating more interesting (less combat-centric) tools would have been easier; the game's possibility space would have been greatly enlarged.
By contrast, let's look at the gameplay tools given to the player in the game System Shock 2 (by Irrational Games and Looking Glass Technologies). There was a web post about a player who, when under attack (by a mutant and a turret) and completely out of ammo, used his psi-telekinesis power to pull an explosive barrel toward him, moving it through the firing arc of the attacking turret. The turret blew up the barrel, destroying the turret and killing the mutant. No one on the System Shock 2 development team explicitly set this area up with this outcome in mind; these things emerged from the game's general-purpose approach to gameplay tools interacting with the other elements, at the whim of this (clever) player. This is a really good example of a flexible, consistent set of rules, very similar to our bullet/glass or collectible card game examples from earlier: Rules about the relationships between the game's objects and tools had been established at a high level. No code or scripting specifically related to the idea that the player's psi-telekinesis could pull barrels in front of turrets; instead the psi-telekinesis was set up to affect moveable objects, the barrel was tagged as a moveable object, the turret projectiles were set up to affect explosive objects and the barrel was set up as an explosive object. And everything just worked.
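The System Shock 2 anecdote can be sketched the same way. Again, every name and tag below is illustrative, not Looking Glass's actual code; the point is that the interaction falls out of generic tags rather than a scene-specific script:

```python
# Sketch of high-level, tag-based rules (hypothetical names):
# telekinesis affects moveable things, turret fire affects explosive
# things, and the barrel-in-the-firing-arc trick simply falls out.

class Entity:
    def __init__(self, name, moveable=False, explosive=False):
        self.name = name
        self.moveable = moveable
        self.explosive = explosive
        self.destroyed = False

def psi_telekinesis(target, destination):
    # Pulls any object tagged moveable; barrels are not special-cased.
    if target.moveable:
        target.position = destination

def projectile_hit(target, nearby):
    # Turret fire detonates anything tagged explosive, which in turn
    # damages whatever happens to be nearby.
    if target.explosive:
        target.destroyed = True
        for entity in nearby:
            entity.destroyed = True

barrel = Entity("explosive barrel", moveable=True, explosive=True)
turret = Entity("turret")
mutant = Entity("mutant")

psi_telekinesis(barrel, destination="turret firing arc")
projectile_hit(barrel, nearby=[turret, mutant])
# Barrel, turret and mutant are all destroyed without any scene-specific
# scripting; the outcome emerged from the general rules.
```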
Again, as a downside, attempting to create flexible game systems (that behave according to implicit, rather than explicit, rules) opens the door to undesirable exploits. So efforts must be undertaken to bulletproof against anything that outright breaks the game.
Unit Needs and Behaviors
Most game units have very limited awareness of their state, needs or environment. They generally don't need any greater awareness: Imagine a racing game in which one of the drivers was distraught or suicidal because his Sim-girlfriend had just broken off their relationship. Sounds ridiculous. But imagine a racing model in which the drivers were intelligent agents who were aware of their car's current fuel needs. That sounds interesting (to me). And, to integrate some of what we've talked about, imagine that this self-aware driver then uses the game's thoroughly modeled aerodynamic system to 'draft' behind another racer to conserve fuel. For all I know, people making racing games might already be doing this; my point is that the deeper simulation in our hypothetical model provides a much larger possibility space. The self-aware driver provides a more interesting AI opponent and the wind-drag model allows the player to take more strategic elements into account and act upon them.
The deeper simulation of additional aspects of a game does not inherently make the game more fun. But if you choose the 'right' aspect to simulate, you can make the game more interesting. For instance, DX combat featured units that would run away if they realized they were badly wounded. This did not make combat more fun, but it made it one step more interesting than a toe-to-toe shootout. Players remarked on how it prompted ethical decisions: Track him down and shoot him in the back, or let him go, since he is no longer a threat? For DX2, we're thinking of ways of expanding upon this idea, allowing units within a group to maintain awareness of group needs as well as individual needs. This leads to some obvious ideas: A medic squad member, a commander, etc.
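A unit that is aware of its own state can be sketched in a few lines. The flee threshold and state names here are invented for illustration; the actual DX logic was more involved:

```python
# Sketch of a self-aware combat unit: behavior follows from the unit
# checking its own state, not from a designer-scripted event.
# The 25% threshold is an invented, illustrative value.

class Soldier:
    FLEE_THRESHOLD = 0.25

    def __init__(self, max_health=100):
        self.max_health = max_health
        self.health = max_health
        self.state = "fighting"

    def take_damage(self, amount):
        self.health = max(0, self.health - amount)
        self.update_behavior()

    def update_behavior(self):
        # The unit evaluates its own needs every time its state changes.
        if self.health / self.max_health < self.FLEE_THRESHOLD:
            self.state = "fleeing"

guard = Soldier()
guard.take_damage(80)   # 20% health remaining: the guard runs away
```

Group awareness, as described above, would layer on top of this: a medic might query squad mates' `health`, a commander might weigh the squad's collective state rather than only his own.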
Another direction in which we’re trying to move for DX2 is real-time IK-based movement. People talk a lot about it, but we want to use it for gameplay-specific purposes. With IK pointing, touching and head movement, suddenly character movement is not limited to what an artist has pre-defined. With IK, we can model more on the unit’s response-to-environment side. The IK will let a unit flexibly act on its desires. For instance, if a bystander thinks that the left door is the one that the police should open, it can point in real-time to the left door. (While not a needs-based behavior, the IK facilitates expression of this behavior. The IK ‘body language’ communicates AI state and change to the player.)
V And Beyond
What comes next? Clearly we’re moving along a curve of greater hardware capability, more elaborate software systems and a more sophisticated understanding of our nascent art form. What’s the next revolutionary gameplay angle someone will exploit by figuring out a deeper, more interesting way to model a game system? I can’t say with certainty, of course. But I can look at the last cycle of games and point to two interesting, noteworthy examples:
- Thief looks on the surface like a shooter. However, the game design team at Looking Glass decided to model sound propagation, lighting and AI awareness in a much more complex way. In doing so, they greatly expanded the possibility space of the first-person-perspective shooter. They were smart enough to know that their approach required them to provide the player with a great deal more feedback.
- The Sims (by Maxis) created a character "needs" model that, while it seems fairly simple, is far more complex than anything used to represent the moods and needs of most game characters. (Most game units, of course, have no concept of anything much more than whether they can see an enemy. Even in all the games that rely heavily on the game industry's meat-and-potatoes of faux combat, units generally fight until they drop dead (instead of running away when badly wounded), fail to intelligently switch weapons (based on the situation or upon enemy defenses), and lack any significant amount of tactical awareness with regard to their squad mates.) In creating their character needs model, Maxis created a sandbox of possibility that was entertaining to explore, conceptually. It didn't feel like a game, in that there were no hard-and-fast victory conditions and little in the way of artificial conflict, but through its flexible system it allowed the player a lot more expression than most games.
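A toy needs model in the spirit of the above might look like this. The specific needs, decay rates and selection rule are all invented for illustration, not Maxis's actual design:

```python
# Toy character-needs model: each need decays over time, and the agent
# picks its next activity from its own internal state rather than from
# a script. All values here are invented.

class Character:
    def __init__(self):
        self.needs = {"hunger": 1.0, "energy": 1.0, "social": 1.0}
        self.decay = {"hunger": 0.02, "energy": 0.01, "social": 0.015}

    def tick(self):
        # Simulate the passage of time: every need drifts downward.
        for need in self.needs:
            self.needs[need] = max(0.0, self.needs[need] - self.decay[need])

    def most_pressing_need(self):
        # The agent's behavior emerges from whichever need is lowest.
        return min(self.needs, key=self.needs.get)

sim = Character()
for _ in range(10):
    sim.tick()
# With these decay rates, hunger drops fastest and becomes the most
# pressing need, so a Sims-style agent would now seek food.
```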
Someone in the next cycle, I hope, will pick out a new area, model it in a high-fidelity way that can be made interesting for the player, and will contribute their own part to the revolution. Maybe they will leapfrog from Thief’s sound/light/awareness simulation in another stealth game, or maybe they’ll pick up where The Sims left off and create characters that seem remarkably alive, with feelings, moods and relationships.
But what lies beyond the short-term? (This part is for fun: something to embarrass me in the future, like an insulting note from my past self.) How will games be different a decade from now? Here's some hopeful and perhaps provocative speculation:
- Speech Synthesis and Dynamic Game Conversations: Imagine if the game could assess a situation based on a long series of relevant player inputs, string together some responses and construct a convincing verbal response using a speech synth system. Suddenly, vastly more interactivity is possible. Once again, instead of a few canned responses (provided by the designer), the game could allow for a much wider range of responses; games might someday be able to analyze voice input and formulate a conversation that never had to be written by a designer…a conversation of much greater relevance to the player's actions. (And when speech synthesis is combined with true artificial intelligence, narrative games will finally become truly interactive.)
- Long-term persistent games: The player starts a game and plays it for years (or his entire lifetime) as it wraps itself around his choices. The more he plays, the more unique the game gets.
- Auto-generated content: At some point, games might dynamically generate terrain and architecture, creating entire cities on the fly, based on some parameters. Also, units (or characters) will be created in the same way. Building all this around player input-or past player decisions-will allow games to spin out alternate futures based on the player’s initial moves.
- Intelligence vs. Multiplayer: Most of us have accepted multiplayer (MP) as the future. But if AI entities were as smart as people, wouldn't narcissism dictate the desire for single-player (SP)? Would you rather have 4 obnoxious roommates or a really good dog? Some experiences might be qualitatively better in an SP environment. For instance, is it spookier to explore a haunted house alone or with 100 people? Also, MP games currently use fairly static, traditional environments and rely on the agency of other players to create interesting (or emergent) interactions. Immersive sims are SP games with a huge emphasis on creating an interesting (dynamic, interactive) environment and an expressive set of player tools, hopefully (increasingly) built using simulations. Imagine if you combined these two.
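Of these speculations, auto-generated content is the easiest to sketch with today's tools. A toy, seed-driven generator (every parameter below is invented) demonstrates the key property: the same inputs, perhaps derived from past player decisions, always reproduce the same world, so generated content can persist without being stored:

```python
# Sketch of deterministic, parameter-driven content generation:
# a tiny city grid laid out from a seed. All names are illustrative.
import random

def generate_city(seed, blocks=4):
    # A private Random instance keyed on the seed makes generation
    # repeatable regardless of what else the game has randomized.
    rng = random.Random(seed)
    building_types = ["apartment", "shop", "warehouse", "park"]
    return [[rng.choice(building_types) for _ in range(blocks)]
            for _ in range(blocks)]

# The same seed (e.g. a hash of the player's earlier choices) always
# yields the same city, so an "alternate future" can be regenerated
# on demand rather than saved.
city_a = generate_city(seed=42)
city_b = generate_city(seed=42)
assert city_a == city_b
```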
This wraps up the lecture. I hope you enjoyed listening as much as I enjoyed preparing. Actually, I hope you enjoyed listening a great deal more than I enjoyed preparing.

As games continue to rely on increasingly realistic or complex simulations, obviously we'll have a bunch of problems to solve related to uncertainty and user feedback. But the end result, if we solve those problems, will be unprecedented possibility in games. Striving for finer granularity in the representational systems we create for games should allow players much more freedom of expression and should make the 'game' experience more about the player and less about the designer. We want players evaluating their environments, considering their tools and formulating their own strategies with as little regard as possible for what we as designers might have wanted them to do. Older game genres might be completely reinvented when built upon deeper simulations. Additionally, new game forms will emerge. Even though this approach involves the designer surrendering some control of the game's emergent narrative to the player, ultimately this should prove much more creatively satisfying; our goal is to entertain, to allow players to interact and express. In the future, we might only be "designing" games at a higher level, establishing parameters and allowing the players and the game's intelligent agents to work out the details.
Lastly, before I stop talking, I’d like to offer special thanks to the people who have taught me what I know about design and development, without whom Deus Ex would never have been made: Doug Church, Warren Spector, Marc LeBlanc and everyone at Looking Glass and Ion Storm Austin. Thank you and goodbye.