Thinking like the audience…
Think Like a Player! provides us with this handy list derived from the not-dead-yet-dammit world of interactive fiction:
The Player Doesn’t Know What’s Important “…authors know what’s important in the text that’s mentioned and players don’t…”
- Don’t bury relevant messages
- Don’t make the player have to remember too much
The Player Doesn’t Think Your Game Is Special “…games are special and magical and beautiful to the authors. To the players, on the other hand, they’re exactly the same as the three dozen other games waiting for their attention…”
- The game must have something cool about it
- This should be the most interesting story to be telling
- Don’t do stupid things out of habit
The Player Can’t Read Your Mind “Important things should be in the game, and things in the game should be important.”
- Puzzle solutions should be conceptually reversible
- The game shouldn’t tell the whole story, it should tell the right parts of the story
It’s hard to look at lists like these (Ernest’s lists in his Gamasutra articles, the one I have in A Theory of Fun, even Noah and Hal’s 400 Project) and not feel oddly ambivalent about it all.
After all, there’s no such set of rules in writing music. There are pretenses toward them in writing movies, but we all know how formulaic movies have gotten.
This observation, though, strikes me as having real insight into player psychology:
This is really easy and some people totally fail to grasp it. So here it is: when people talk about a game being ‘fun’, what they mean is the fun density, not the absolute amount of fun. Therefore, if two games have the same absolute amount of fun things, the longer one will be less fun. And that means that if you make your game really long, you had better put a lot of fun stuff in it, or it will suck.
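To make the arithmetic behind that claim concrete, here is a toy sketch (the counts and hours are invented numbers, purely for illustration):

```python
# Toy illustration of "fun density": the same count of fun moments spread
# over a longer playtime yields a lower density, and (per the quote above)
# a game that feels less fun overall. All numbers are invented.

def fun_density(fun_moments: int, hours_of_play: float) -> float:
    """Fun moments per hour of play."""
    return fun_moments / hours_of_play

short_game = fun_density(fun_moments=60, hours_of_play=10)   # 6.0 per hour
long_game  = fun_density(fun_moments=60, hours_of_play=40)   # 1.5 per hour

print(short_game > long_game)  # True: same absolute fun, lower density
```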
This is all interesting to me because there have been some notable articles lately on measuring human responses to music, and identifying common patterns that make people like a piece of music or not. For example, there’s the music service Pandora, which classifies music based on patterns inherent in the songs, and lets you build a personalized playlist that has things like “folk rock qualities, a subtle use of vocal harmony, mild rhythmic syncopation, mixed acoustic and electric instrumentation, and major key tonality.” That’s what you get when you put in Ellis Paul as a starting point, anyway.
Pandora originated with the Music Genome Project, which attempts to deconstruct core elements of music on the grounds that a reductionist approach will tell us what elements trigger specific responses. This is essentially the scientific principle of actions and reactions illustrated. And it does seem to provide results — not only does Pandora have a reasonable playlist after I give it enough inputs, but there have recently been a slew of news bits related to this sort of approach to human taste. I see on BoingBoing today an article about a project assessing the worst sound in the world, for example.
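At its simplest, that kind of attribute-based matching might look something like the sketch below. The attribute names and scores are invented for illustration; the real Music Genome Project uses hundreds of hand-annotated traits.

```python
# A minimal sketch of attribute-based song matching in the spirit of the
# Music Genome Project. Attribute names and scores are made up.

from math import sqrt

songs = {
    "seed (folk rock)":    {"vocal_harmony": 0.7, "syncopation": 0.3,
                            "acoustic_mix": 0.8, "major_key": 1.0},
    "candidate A":         {"vocal_harmony": 0.6, "syncopation": 0.4,
                            "acoustic_mix": 0.7, "major_key": 1.0},
    "candidate B (metal)": {"vocal_harmony": 0.1, "syncopation": 0.2,
                            "acoustic_mix": 0.1, "major_key": 0.0},
}

def distance(a: dict, b: dict) -> float:
    """Euclidean distance between two attribute vectors."""
    return sqrt(sum((a[k] - b[k]) ** 2 for k in a))

seed = songs["seed (folk rock)"]
playlist = sorted((name for name in songs if name != "seed (folk rock)"),
                  key=lambda name: distance(seed, songs[name]))
print(playlist)  # nearest-attribute songs come first
```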
And then we have approaches such as that of HitPredictor, the widely reported MIT research project that can predict hits.
The goal is to pinpoint trends in pitch, rhythm and cadence that are driving consumer spending habits. However, the MIT researchers believe they’ve taken the science to another level.
“Some people really care about instrument sounds and complexity of the music,” Whitman said. “But the 14-year-old teenage girl could care less, as long as her friends are listening to it.”
So here we see a social approach to identifying tastes, one that sees currents in how tastes change. It’s undeniable that as time goes by, tastes become more sophisticated. For example, as sophisticated melodically as classical music can be, it’s still generally far less rhythmically complex than more contemporary music — even rock ‘n’ roll with a straight-ahead backbeat is often more diverse — and the public’s appreciation of exotic harmonies has grown tremendously. Even though the wilder experiments of modernist music haven’t been accepted by the record-buying public yet, it’s hard to look at the sales of singles by Dave Matthews, with their 7/8 time signatures, and not realize that somehow, the public’s overall “ear” has grown more sophisticated.
And yet, one still has that reductionist impulse. After all, simply following the social trend is a good way to end up making pablum; we usually see the difference between a good hit song and a bad one, given enough time. Some songs just don’t have staying power, even if they captivate quickly.
In music, fortunately, an expert sound engineer went looking for common factors, and found them.
Make note of that ‘vertical bar’ look because you’ll be seeing it over and over in music that’s had the power to command sustained catalog sales. The sound is produced by high contrast between the transient attacks of instruments and the background space where the instruments are placed, and you’ll be seeing more charts and illustrations of how this is accomplished…
…These albums SELL in a way that completely outclasses what the industry does with the hit-record-of-the-week.
And the more strongly they sell, the more likely it is that they will have High Contrast characteristics- specific characteristics that are recognizable in charts and measurements. There is a common factor shared among even very dissimilar multiplatinum hit records, having to do with the distribution of peak amplitudes, and this is exactly what is destroyed by current high-level mastering practices.
Go read all the analyses — seriously, it’s worth it even if you aren’t a musician or recording engineer.
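One way to put a rough number on the “vertical bar” look he describes is crest factor, the ratio of peak level to RMS level; heavily limited masters push that ratio toward 1. This is only a sketch of the idea with invented toy signals, not his actual measurement method:

```python
# Rough sketch of measuring dynamic contrast as crest factor
# (peak level divided by RMS level) over a buffer of samples.
# "Wall of sound" masters push this ratio toward 1.

from math import sqrt

def crest_factor(samples: list) -> float:
    peak = max(abs(s) for s in samples)
    rms = sqrt(sum(s * s for s in samples) / len(samples))
    return peak / rms

# Invented toy signals: one with sharp transients over quiet space,
# one squashed toward a constant level.
dynamic_mix  = [0.05] * 90 + [0.9] * 10   # quiet bed, loud attacks
squashed_mix = [0.8] * 100                # everything near full scale

print(crest_factor(dynamic_mix))   # ~3.1: big gap between attack and bed
print(crest_factor(squashed_mix))  # 1.0: little contrast left
```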
Now, something like that raises the question of whether it’s possible to think like the audience if the audience doesn’t actually think at all. Are we all just responding to set stimuli? To what degree does taste matter? How much of taste is merely conditioning (I suggest this in my book, actually), essentially training to like a given set of acoustic patterns?
And that sort of question makes us turn around and ask a whole new set of questions of something like the game design lists above. Learning to think like a player is incredibly difficult, and probably the biggest challenge for many designers. Putting yourself in the player’s shoes requires a trick of perspective that can be difficult precisely because it demands ignorance, and it’s hard to let go of what you know. Almost all of the rules that Dan Shiovitz gives us from the IF world in his article relate to the relative levels of knowledge that the developer and the player bring to bear on something.
If we found a set of rules that were less like that, that were actually reliable guides to “good controls” (like the stuff Ben Cousins found with jumping times, and as he is trying to do with his own atomic model of game design, which is very intertwined with mine described in A Grammar of Gameplay), or to level lengths, types of challenges, etc., what would that mean for the industry?
Would the music biz be better if every record were mastered the same way as “Stairway to Heaven” or “The Stranger”?
Or is the secret actually, as Shiovitz suggests, the density of fun?
Some nice questions to ponder.
16 Responses to “Thinking like the audience…”
I think we can go around in circles all day long about what’s fun and what should be in a game in terms of rules, but perhaps the crux of the matter is we’re avoiding definition of what a game really is in the first place.
Ben Cousins *is* on the right track with his atoms, and your attempt to form a notation for them is too — however, the missing link is not only to define the game side of things, but to involve the player. That’s done through communicating the game to the player, and with videogames that communication is predominantly visual.
My research (check my homepage) is about this game-graphic mapping. Your stuff, Ben’s stuff, the stuff of Chris Crawford, etc., it’s *all* on the right track regarding breaking down a game, but only a few people like the enigmatic Will Wright explicitly talk about wrapping the game in a (graphical) metaphor, because otherwise, as he says, it’s “just a bunch of numbers”.
So we can make up all the rules about games we like, and surmise what really makes a game fun, but until we start recognising the game-graphic communication thing, we’re either limiting ourselves to discussing games or discussing graphics, and not the interface between videogames and players, which is the key. But again, I could be wrong 🙂
I’m not really worried about the definition of game at this point; between the work I did for AToF and the various definitions that are out there already, I feel that it’s a fairly well-understood area.
I agree that examining the presentation is the third step; I’ve always thought about the problem as being in three parts: the objective, the mechanics, and the metaphor (not the same as Mahk’s MDA scheme). However, I also think that the metaphor is the part for which there is the most prior art, since virtually everything in our metaphors is borrowed from other media. So I feel less urgency in looking into that.
I do think it’s a mistake to assume that graphics are the means of communication; there are games that are non-visual (the old swimming pool game “Marco Polo” jumps to mind).
Oh–you asked whether I was extending Ben’s work directly in doing “Grammar of Gameplay” — not really. He & I both hang out on a forum together, and I did the first pass of the Grammar stuff separately from seeing his work… then I found it, and started borrowing some terms because it made sense to build a common vocabulary. Since then, we’ve tossed stuff back & forth a bit, and at this point, I think the two approaches are basically intertwined.
Completely agree with you on the objective/mechanics and metaphor definition.
I think a lot of the hot stuff out there at the moment is very much intertwined, and that’s a Good Thing because it’s like a filtering process through which the “pure” definition of videogames will emerge. You know, unless everyone’s wrong and the world isn’t really flat 😉
But back to your “thinking that graphics is the means of communication is a mistake” — true, Marco Polo is an example of a non-visual game, and while there have been “audiogames” (games where sound is the main feedback to players) on computer/console, videogames are predominantly visual, hence “video”-game.
I’m not talking about borrowing from film or fine art for our definitions, because that’s very medium-specific, and those are (mostly) non-interactive (noting digital art installations of Zack Booth Simpson et al as exceptions). It’s been a major cause of stunted growth in videogame design to borrow from narrative media rather than create a new ontology.
In that regard, I feel that areas such as Information Visualisation (Tufte, etc.) have been overlooked. Another area, Mathematical Game Theory, has a wealth of concepts and terminology for defining games and “gameplay” (strategies, moves, etc.), but it gets mentioned neither often enough nor in much depth, apart from references to things like overcoming “dominant strategies” to balance gameplay.
So instead of inventing new rules, why not borrow old ones? Not from other media per se, but from other disciplines (Visualisation, Game Theory, etc.) that are very much about the mechanics of games: a game is a model of a conflict situation, and (with videogames at least) we have to communicate that model graphically to players.
Well, to start with, I don’t regard videogames as somehow special or different from board games or other forms of game. When I quantize them down to atoms, they all look the same, so I erase the distinction in terms of theoretical work.
When I refer to applying stuff from other media, I was referring to the fact that stuff like film studies, color theory and graphic design, acoustics, and so on have a host of things to teach us about presentation, what works, and what doesn’t. I include Tufte’s work in that category. Pretty much everything in the “metaphor” category in videogames is effectively borrowed or making use of the techniques of other media, IMHO. I think that videogames have relied too much on the metaphor and that this has stunted the growth of game design, but that doesn’t mean that there’s not still a lot to learn from other media in terms of presentation.
Game theory has some stuff that is applicable, but I’ve never found THAT much use in it when directly applied. I even took some potshots at it in the book. 🙂 I’d be curious to hear more about what you have found that seems directly applicable.
Yeah I remember looking at some of the mathematics of Game Theory and just thinking “ok, close the book, and walk away slowly. Here be nerds”… but really, just the concise terminology and definitions of what a game is, that’s the inviting part. You can take a model of a conflict situation, whether military or economic or whatever, and model it and notate it and analyse it, etc. I think there’s a big correlation between the concept of a Cousins/your game atom, Crawford’s verbs, and the idea of a move in Game Theory. There’s a lot of stuff not only about deciding what alternative to choose given a particular situation or set of variables, but also stuff about the most likely outcomes and what happens if the move is flawed (sort of like someone hitting button A when they meant to hit B on their controller).
As for the graphics side of things, I think it can be split into Information Graphics and Art. You’ve got the graphics that are there to communicate definite values, such as text/numbers or bars, etc. and then you’ve got purely decorative graphics that don’t really inform the player about anything that would affect their moves — but… it could be said that even the decorative stuff *is* informative when you talk about things like theme or genre. For example, you see a bunch of polygons headed towards your avatar. If they look like a rocket, and we know rockets are bad, we’ll move our avatar, but if they look like marshmallows we might have our avatar try to catch them instead. So there’s probably the info-vis style graphics, the decorative graphics of no consequence (borders or extraneous graphical artifacts) and then there’s the in-between, which is where other media like fine art and film become ripe for borrowing from.
Metaphor and visualisation are very much intertwined. As Jim Blinn (computer graphics guru) said:
“In the past I have always thought of visualization as primarily a mental process: you receive some knowledge (from any of various sources) and, when you understand it thoroughly, you can “create a picture of it” in your mind. Nowadays computer graphicists are trying to place this picture more directly in the mind by creating the pictures with a computer. (This, of course, has been done for some time using more conventional illustration media). The term “visualization” has come to be a proper noun referring to the actual picture or computer image itself, as in the phrase “I created a visualization of the process on the screen”. Even though the visualization is on a piece of paper or a computer screen, the ultimate destination is the mind.”
So metaphor is visualisation is metaphor — we’re putting the same model into someone’s head whether through explicit graphics or what might be referred to as ambient visualisation.
One of the main reasons I think that there is merit in looking at how you map game to graphic is that by changing an aspect of the graphical presentation of a game, you actually change the game. The rules define what information players have available to them (“perfect”, e.g. Chess, or “imperfect”, e.g. Quake, due to architectural occlusion — both Game Theory terms) in order for them to make their moves. For example, changing the game of Quake III Arena from an FPS to a top-down game makes it a game of perfect information. Players are now aware of the location of all other enemy avatars. This makes the gameplay less immediate and more strategic. Because you’ve changed the graphics, or at least the amount of information presented to players via graphics, you’ve changed the rules and thus the game.
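To make that concrete, here is a toy sketch: the underlying world state is identical in both cases, and only the filter the presentation applies before showing it to the player changes. The positions and the occlusion flag are invented for illustration.

```python
# Toy sketch: the same world state, filtered two ways before being shown
# to a player. The first-person view hides occluded enemies (imperfect
# information); the top-down view reveals everything (perfect information).

world_state = {
    "player":  {"pos": (0, 0)},
    "enemy_1": {"pos": (5, 2), "behind_wall": False},
    "enemy_2": {"pos": (9, 7), "behind_wall": True},
}

def first_person_view(state: dict) -> dict:
    """Imperfect information: occluded entities are simply not shown."""
    return {name: e for name, e in state.items()
            if not e.get("behind_wall", False)}

def top_down_view(state: dict) -> dict:
    """Perfect information: everything is shown regardless of occlusion."""
    return dict(state)

print(sorted(first_person_view(world_state)))  # ['enemy_1', 'player']
print(sorted(top_down_view(world_state)))      # all entities visible
```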
So there’s even a link between Information Visualisation and Game Theory. And then with Visualisation you have your link to Art and Metaphor, and then Perception and Psychology, then around and around we go 🙂
And no, boardgames are still games, but the medium is a board (and tokens, etc.) and is tangible. As you’d know from your history of boardgame design, you’re only limited by your physical resources and imagination. Videogames take away the limitations on the former, and so it’s like having infinite materials with which to interface your game with players. If we want to improve the design of these virtual interfaces, we need to look at existing disciplines and extend them.
So clearly things like perfect versus imperfect information are applicable, but that’s a fairly high-level thing to borrow.
I think your Quake analogy is flawed, because upon analyzing the atoms that make up the top-down versus the 1st person version, you find that the core challenges are significantly different. By changing the perspective, you have done far more than change the access to information; you have also changed the action of firing from “clicking on a point in 2d space” to “orienting a vector of force along a plane perpendicular to the view.” Similarly, the basic actions for movement are likewise different.
For your analogy to hold up, you’d have to propose an aspect of graphical change that does not affect the core atoms. Changing imperfect to perfect information could be accomplished by being able to see through walls, for example. Of course, in my atoms, available topology is intrinsic to an atom, so you’d be changing that in my model, thus still effecting an atomic change… Hmm.
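To make the atomic difference concrete, here is a rough sketch of the two firing checks, with the geometry heavily simplified and nothing engine-specific assumed:

```python
# Sketch of why the "fire" atom differs between the two presentations:
# top-down firing can be a 2D point test, while first-person firing is
# closer to casting a ray from the camera and seeing what it intersects.

def topdown_hit(click_xy, target_xy, radius=0.5):
    """Top-down: did the player click close enough to the target?"""
    dx, dy = click_xy[0] - target_xy[0], click_xy[1] - target_xy[1]
    return dx * dx + dy * dy <= radius * radius

def fps_hit(camera_pos, aim_dir, target_pos, radius=0.5):
    """First-person: does a ray from the camera pass near the target?
    aim_dir is assumed to be a unit-length 3D vector."""
    # Project the target onto the aim ray, then measure the miss distance.
    to_target = [t - c for t, c in zip(target_pos, camera_pos)]
    along = sum(a * b for a, b in zip(to_target, aim_dir))
    if along < 0:                      # target is behind the player
        return False
    closest = [c + along * d for c, d in zip(camera_pos, aim_dir)]
    miss = sum((t - p) ** 2 for t, p in zip(target_pos, closest))
    return miss <= radius * radius

print(topdown_hit((3.1, 4.0), (3.0, 4.2)))            # True
print(fps_hit((0, 0, 0), (1, 0, 0), (10, 0.2, 0.1)))  # True
```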
As regards info-vis, one of the classic examples is the tic-tac-toe game seen as a magic square puzzle… obviously, the metaphor is important to the ultimate player experience, as you point out. But we also know that players see through metaphor and focus in the end on the mechanics. The magic square example becomes problematic because it is bad info-vis — it imposes an additional metaphor on top of a space that really didn’t need it.
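For anyone who hasn’t run into that example: label the board with the 3x3 magic square and every line of three sums to 15, so “get three in a row” and “pick three numbers from 1 to 9 that sum to 15” turn out to be the same game. A quick sketch that verifies the equivalence:

```python
# The classic tic-tac-toe / magic-square isomorphism.

from itertools import combinations

magic_square = [
    [2, 7, 6],
    [9, 5, 1],
    [4, 3, 8],
]

# All eight tic-tac-toe lines (rows, columns, diagonals) as cell triples.
lines = ([[(r, c) for c in range(3)] for r in range(3)] +      # rows
         [[(r, c) for r in range(3)] for c in range(3)] +      # columns
         [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])

line_sums = {sum(magic_square[r][c] for r, c in line) for line in lines}
print(line_sums)  # {15}: every board line is a triple summing to 15

# Conversely, the 3-number subsets of 1..9 summing to 15 number exactly
# eight, matching the eight board lines.
fifteens = [t for t in combinations(range(1, 10), 3) if sum(t) == 15]
print(len(fifteens) == len(lines))  # True
```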
But I also think it’s a mistake to regard the decorative borders as being extraneous. These too have impact on the user/player, and there’s plenty to draw on in the field of graphic design that can help inform the design of these types of elements.
Any developer will tell you that regarding physical resources as infinite in the process of making a videogame is a mistake too. 🙂 Videogames only theoretically take away those limitations — they’re still a lot of work, and the fact that you can manifest a large array of what you might want via code (not, by a long stretch, “everything” that you might want) does not mean that you actually have infinite materials to interface with players.
I’m a fan of fairly complex progressive rock, and I’m pretty much in agreement with everything the sound engineer says. One of the best practitioners of high dynamic contrast in the pre-digital years was Conny Plank. Even when the music was merely adequate, some of the soundscapes Plank produced make the recordings simply magical to listen to. Although I’d recommend getting hold of Grobschnitt’s “Rockpommel’s Land”, you can also hear some of the magic on more mundane (!) but more easily obtained recordings such as Kraftwerk’s “Autobahn”, Ultravox’s “Vienna” or Eurythmics’ “In The Garden”.
I think it’s all about mental stimuli without exhaustion. We enjoy background complexity that we can focus on for short periods, but we like to have simplicity and repetition as a crutch to lean on. Simple rules, things you can learn to follow without pressure, and complex designs to unfold in order to keep you preoccupied.
Classical music is very rhythmically complex — you just need to look at music composed in the late 19th century and later. From that period on, rhythm as well as harmony significantly surpasses just about everything you’ll find in popular music. It’s not really melody that’s any more complex, although it could be.
Very interesting article on music, though. I’m struck by the fact that the high level of dynamic contrast really characterizes one classical composer that practically everyone likes, even if they “don’t like classical music” — Beethoven. His music’s practically defined by it. Much the same could be said of the sturm-und-drang composers — a good chunk of Haydn and some Mozart, say.
“Taste” is interestingly subjective. People love the Three Tenors recordings, for instance, even though they’re largely in awful taste, and the superstar tenors involved would almost certainly not get away with singing like that in any situation with a conductor in control.
Hmm, I have to disagree on the classical music complexity part. Although we can debate whether 20th c. orchestral music is actually “classical” anymore. Also, by the time you hit the late 19th and then early 20th c., it’s starting to absorb a lot of the more syncopated rhythms from jazz and Latin influences.
When analyzing a piece of classical music, you generally look for a few things — the harmonic progression and the basic rhythmic statement of the piece. Something like the famous opening of Beethoven’s 9th, for example, has a rhythmic unit that is varied in length and tempo, but is still recognizable throughout the first movement. Compared to the rhythms found in be-bop, it’s pretty basic. There doesn’t tend to be much use of polyrhythms, for example, until later. IMHO, anyway. YMMV. 🙂
I think the audio engineer would point out that while much of the dynamic contrast is determined by the orchestration, a lot of it is also in the recording and the conducting, in the case of classical. That’s probably part of why certain recordings of familiar pieces are more prized than others… when you read his articles, he’s emphasizing dynamic contrast on levels most listeners don’t even think about, and beyond the usual layman’s (or even musician’s) use of the term “dynamics,” discussing stuff like RMS, transient 40 Hz impacts, and natural overtones.
The only difference in the controls of top-down Q3A and first-person Q3A would be that the top-down version doesn’t make use of the up-down axis for aiming at other avatars, and perhaps in that case jumping and crouching would also be redundant — you could still use the WASD keys for navigation. But that’s my point: by changing the graphical representation of the information in the game world, you *are* fundamentally altering the game. However, it is not just the atoms/moves available to players, but the knowledge players have available to them that is also changed. It’s this second part that’s often overlooked in the literature, or at least not often discussed in the same breath.
I’m not saying a top-down version of Q3A is better, just different, and this difference can be expressed in terms of existing concepts in Mathematical Game Theory. Something like perfect/imperfect information is not high-level either — it’s one of the fundamentals of Game Theory, as illustrated by the classic Prisoner’s Dilemma example.
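For reference, here is a minimal sketch of the Prisoner’s Dilemma in normal form, with the usual textbook payoffs and the standard dominant-strategy result:

```python
# Minimal sketch of the normal-form Prisoner's Dilemma. Payoffs are years
# in prison (lower is better) for (row player, column player).

payoffs = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def best_reply(opponent_move: str) -> str:
    """Row player's best response to a fixed column move (fewest years)."""
    return min(("cooperate", "defect"),
               key=lambda mine: payoffs[(mine, opponent_move)][0])

# Defection is a dominant strategy: it is the best reply to either move,
# even though mutual cooperation would leave both players better off.
print(best_reply("cooperate"), best_reply("defect"))  # defect defect
```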
An aside: anyone reading “game theory” and thinking “ludology” — we’re talking about Mathematical Game Theory, not the critique of videogames from a sociological/philosophical perspective. I agree with Chris Crawford that ludology is “a body of insightful work that avoids mentioning anything of utility to game designers” — see http://www.igda.org/columns/ivorytower/ivory_May04.php
I don’t regard all decorative graphics as extraneous, but in terms of Tufte’s data-ink principle, most decorative graphics aren’t necessary for player decisions as they don’t inform the player of anything pertaining to their next move. I believe decorative graphics are necessary because they assist with immersion, and perhaps in that case they are conducive to players being more focussed when making gameplay decisions, but again, it’s oft-cried that videogames have become graphical showcases and the gameplay side of things has been somewhat drowned. As Doug Church said, genre has become the new shorthand for game design. Genre that’s heavily determined by decorative graphics. Remember how the old classic videogames gave you enough information using only coloured squares? The rest of the game was a visualisation — but then it mostly resided in the player’s mind instead of the screen.
As for the point that thinking virtual resources are infinite is a mistake, and that videogames only theoretically take away limitations: I’d point out that game design is theoretical anyway, until it’s realised as code. The only limitations are the imagination of the designer, the ability of the programmer, and how they communicate — how readily the design becomes the artifact. That’s the path I’m on, developing a framework for videogame design that is an extension of Game Theory and Visualisation. The glue, the ingredient that will hopefully enable the transition from design to code, is the Design Patterns of Gamma et al. Anyway, back to atoms and such, I think they’re good, great, wonderful, but perhaps a focus on a distinction between game and graphic might be of some benefit.
The modern audio engineer strives toward utilizing the maximum fidelity of the medium. (This is basically the same as making the mix as loud as possible.)
I won’t hesitate to agree with the statement that lack of dynamic contrast makes for a less stimulating musical experience. A basic problem with modern music, however, is the length of the experience when consumed through the media which promote it (TV and radio).
The image of the perfect mix as a “wall of sound” has probably gotten somewhat out of hand, but with radio and TV double-compressing everything to fit their own audio profile, a nice and dynamic mix often ends up swinging backward unless it’s mastered to a non-dynamic volume level.
Youdaman, three things…
1) I would argue that by making that change to Quake, you are definitely changing a LOT more than just the information conveyed to players. The actual physical movements you do are altered; the challenges involved in shooting a target all change; the nature of navigation itself changes. If you do an atomic diagram of the game, most of the lower-level actions will be different. Don’t get too caught up in the information change (which is there, don’t get me wrong)–you’re changing things on multiple axes at once.
2) Tufte (and info visualization in general) is generally interested in maximum transparency; a game is not. Nor is all the relevant info for making decisions the sole useful graphic content in games; presentation also carries the metaphor. Of course games these days pay too much attention to presentation — and, simultaneously, often not enough. A typical flaw is good graphics tech and poor art direction.
3) Lastly, statements like “game design is theoretical anyway, until it’s realised as code. The only limitations are the imagination of the designer, the ability of the programmer, and how they communicate”
are why it’s good for game designers to learn to program — because I have to say this statement is just wrong. 🙂 Game design almost never operates purely in the realm of theory, any more than any other craft operates within the realm of theory. The limitations of code are very real, for all that they are virtual, and it’s not all driven by the limitations of the programmer.
Well you can modify just the graphics of Q3A to a top-down perspective whilst leaving the rest intact — see “Non-Invasive Interactive Visualization of Dynamic Architectural Environments” for a prototypical example, http://www.cs.virginia.edu/~gfx/pubs/archsplit/
The atoms don’t change if you’re talking about the actions available to players, it’s just that some player input (like moving the crosshair up and down) becomes redundant. It’s still a possible input by the player, but it’s of little or no use due to the change in the graphical presentation. So the only change is the visualisation of the Q3A information in this example.
Quite often a videogame is criticised for lacking a coherent graphical presentation. As I said before, the graphics of a game consist of both informative and decorative graphics, and some graphics meet both definitions, especially if they add to immersion in a way that might suggest a particular action, a la J.J. Gibson’s Affordance Theory: http://www.jnd.org/dn.mss/affordances-and-design.html and http://www.tech.purdue.edu/cg/courses/cgt112/lectures/gibson_affordance_theory.htm
I’m talking about game design existing as theory that needs to be applied in the form of code. I agree with you, and people like Chris Hecker and Chris Crawford have also mentioned game designers should at least be aware of coding issues so their enthusiasm for a particular ground-breaking type of gameplay can be somewhat grounded by what’s possible in terms of technology. However, much of the literature out there to do with game design doesn’t suggest a bridge between a particular set of rules for what a game is, and how those rules can be applied as algorithms. It’s up to the “multi-class” designer-programmers to create that bridge for better game design discourse. You’ve attempted to do it symbolically with your “Grammar of Gameplay”, and I’m attempting to do it conceptually in terms of an object-oriented framework or game-graphic mapping.
Tangentially:
Really what you seem to be saying is that European classical music before the Romantic era, uninfluenced by anything else, is less complex rhythmically. 😉 The increased complexity of counterpoint, both in terms of harmony and rhythm, as well as the decreased reliance on formal structure, influence of non-European music, etc. changes the way that later compositions are analyzed.
The classical recording world is interesting from the perspective of the sheer number of remasters that end up being done, over time, from the original masters (typically analog masters). The audio engineering approaches vary quite dramatically, particularly when noise-reduction techniques are also applied. I end up having, in my collection, multiple versions of the same historic recordings, as a result. (Performances end up being prized for their interpretive qualities, I think; ideally the sonics will be there, as well, but for collectors, performance tends to trump recording quality.)
I still need to read that site in its entirety… interesting stuff.