Moore’s Wall: Technology Advances and Online Game Design
This talk was given as part of an IBM Games on Demand webcast conference in 2005.
Hello everyone. This is the first time I’ve ever spoken in this way, so forgive me if I seem a little awkward. It’s kind of strange staring at a camera while I do this. I wanted to talk today a little bit from a game designer’s point of view. I wanted to give you a little bit of perspective on how game designers approach the advance of technology and frankly all the headaches it gives us.
You know, I am really a game designer, and even though now I work in a more executive capacity, I’m really more interested in what it is that we can achieve with this medium, what are the things we can accomplish, what are the things that we can do, given the tools that the computer gives us. And in particular, since I work in online games, the tools that networked computers give us. I want to talk a little bit about something that I am calling Moore’s Wall. You’re all familiar with Moore’s Law — Moore’s Wall is basically the dark side of the advance of technology. It’s the headaches that it gives us, it’s the problems.
So, let me move on to the first slide here. Now, we’re all familiar with Moore’s Law. As originally formulated, it was the doubling of transistors on a given central processing unit over time, every 18 months, 24 months, whatever. Now, we have seen computing power increasing, right? We have seen this improving, we have seen it rising dramatically, we’re all familiar with how fast and nice our computers have gotten. They certainly seem a lot nicer than they used to.
Now, along with this, our capabilities, what we can do with computers, have improved. What we’re doing now, this networked teleconference, is something that would have been fairly difficult thirty years ago on a computer, although it was remotely possible.
Even though our capabilities have increased, though, when we look back at the things that, for example, science fiction daydreamers dreamt about years ago, we’re still not quite there. Just as we don’t have flying cars or moon bases, computers also still haven’t really quite achieved everything that we daydreamed they might, and I think it’s interesting to think about why this is. And it really comes down, in the end, to money. It comes down to the financial feasibility of accomplishing a given task. So let’s look a little bit at this problem, and at what it might imply for game design.
This here is a graph that’s probably familiar to most of you. I got this graph straight off the Intel Research website. It shows the advances in Intel processors over time. Now, this looks like a linear graph, but if you notice on the side, you’ll see that in fact it’s a logarithmic graph — as you go up on the scale, we’re adding zeroes onto the end of the numbers. So what we’re really seeing here is this exponential curve, this exponential rise in the power of computers.
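To make the log-scale point concrete, here is a rough Python sketch of the doubling arithmetic. The 2,300-transistor count of the Intel 4004 is real; the fixed two-year doubling period is an idealization of the law, not Intel's actual roadmap:

```python
def transistors(start_count, years, doubling_period_years=2.0):
    """Project a transistor count forward under a fixed doubling period."""
    return start_count * 2 ** (years / doubling_period_years)

# The 2,300-transistor Intel 4004 (1971), projected 30 years forward,
# comes out to roughly 75 million transistors -- 15 doublings.
projected = transistors(2300, 30)

# Each doubling adds a fixed-size step on a log scale, which is why the
# exponential curve on the Intel graph looks like a straight line once
# you add zeroes up the side of the axis.
```

Nothing in the sketch depends on the exact doubling period; changing it just changes the slope of that straight line on the log plot.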
Now, there’s been a lot of discussion about whether or not Moore’s Law holds true, whether or not we’re really going to continue to double in this way, but that’s not really the point. The point is that we’ve seen a tremendous advance in the processing power, and we’re going to continue to see increases in computing power even if these do not come on single processors. We’re seeing a move to multi-core processors, we’re seeing a move to distributed computing, and we’re certainly seeing vast increases in storage space, in media, in network connectivity and broadband bandwidth, and so on. The problem with all of this is that really, the Microsoft Word that you use today to do word processing is not really dramatically more useful as a tool than the Microsoft Word that you might have used a few years ago.
This could be summarized by using Nathan Myhrvold’s Law, which is that software is a gas that expands to fill its container. What this means is that if you give a programmer a really nice computer, he’s going to find a way to suck it dry and use up every bit of capability that that computer has. This is particularly common in the game industry, where for various reasons — we’re an entertainment medium — people want whatever’s exciting, whatever’s sexy, the eye-candy, the visuals, the speed — the result is that we’re constantly pushing the boundaries of computers, and we’re constantly developing games that force people to buy new machines, buy new computers.
So, what does this do to the games themselves? What does this do to the budgets? Let’s look at some numbers here.
This is a graph of historical game budgets. If we look back around ten to twelve years ago, what you’ll see is that the typical game budget was well under $1 million. Now, this is for a AAA game title, this is for a best-selling, top of the line, double-page spread in the magazine, Game of the Year sort of game. And we’re talking $200,000 maybe. These days, in 2005, at the tail end of a console generation, we’re seeing that a AAA game typically costs in the ballpark of $12m to develop. So we’re seeing an exponential curve in the budgets, as well as in the computing power, rising over time.
So what is that going to mean to us? It has a lot of implications. The next generation of console titles is going to be looking at budgets significantly higher than the $12m. Already we’re hearing rumors of the next-generation teams needing to consist of between 100 and 250 people in order to create a AAA title. That can be fairly daunting from a game designer’s point of view, because we have to make sure that everyone involved in the team is sharing the vision of what we want to accomplish. We have to make sure that everybody is working effectively, and project management on a team of 100 is extremely difficult. We have to coordinate all of the assets that are created, and all of the code, and make sure that everything works together and that nobody is going off in their own direction.
This becomes a truly daunting software management task, because game development is traditionally iterative. It’s not something you can plan out in advance. You cannot plan a fun factor. You have to instead work on the game until you feel the fun factor emerging from your work, and develop it into something that really becomes a hit title.
Let’s look at what this means. If you see back here behind me, over here, what you see is an old 8-bit computer that I keep around for fun, really. That’s the kind of computer that I learned to program on. It ran a 6502 chip, which is to say it ran at 1.79 MHz. Compare that to the nice 3GHz machine that I have under my desk right now. A AAA game on that machine cost about $50,000 to make. You could get it done in a few months, and your team was maybe between 1 and 3 people. The asset load was at most around 64k, because that’s all you had on the disk. In fact, that was a luxury. Consoles at the time would typically offer only 8k to 16k, at most. So this computer sitting behind me was really luxuriously large for developing software.
But here we are in 2004, and a game might cost $10m or more. Its asset load is up to 1.4GB on disk; that’s the typical load you can support on a console disk these days. Now, of that $10m, a very large portion is going towards developing the assets that reside on the disk. The volume of code, which is to say “GAMEPLAY,” that you’re developing hasn’t actually risen nearly as dramatically as the volume of assets that you’re having to develop.
It’s important to point out how inextricably intertwined assets and marketing are, because you need these assets in order to provide a marketable product. These days in a blockbuster environment, you really need something that stands out visually, stands out in audio, maybe it has celebrity voice actors — the assets are what is getting incredibly expensive. So even though $10m is what is going into the game, well over half of it, perhaps, is going towards the development of art, sound, and music. Typically sound is the red-headed stepchild there, it doesn’t get nearly as much attention. And, a substantial amount of that money is effectively going towards creating graphics as a marketing tool, as a way to make the game stand out in a crowded marketplace.
It’s worth pointing out that this $10m figure does NOT include traditional market development funds. It does NOT include fees for end caps, fees for shelving, money paid for traditional marketing and advertising, and any costs incurred for PR. That’s JUST for the development of the game. This has a pretty significant impact on the economics of the business.
Let’s think about this historically. In twelve years, budgets have gone up by a factor of 22. Now, this is a figure that is already adjusted for inflation. And it is not a figure that includes marketing money, because marketing money has gone through a similar rise of its own, with the introduction of TV commercials as a standard part of marketing. Even though budgets have gone up by 22 times, the amount of data that we have to create for the games has risen somewhere between 40 and 150 times.
Now, that’s a pretty incredible number there. To some degree, we are pulling ahead. After all, budgets aren’t going up quite as fast as the amount of data. So we have gotten, over the years, six times more efficient at making data for the dollar. But we’re still on a losing trend here. We’re still coming up against a wall, which is going to make it very difficult for us to continue on the current economic path. As the costs continue to rise we’re going to have to change how we develop the games. And, because we’re talking about online here: online is really in the vanguard of this problem. Online is going to be one of the first places where we run into this problem.
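The "six times more efficient" figure can be sanity-checked with the numbers just quoted. This little Python sketch only restates the talk's arithmetic; the growth factors are the ones from the slides, not independent data:

```python
def efficiency_gain(budget_growth, data_growth):
    """How much more data we produce per dollar than we used to:
    cost per unit of data fell by data_growth / budget_growth."""
    return data_growth / budget_growth

# Budgets rose 22x; the data load rose somewhere between 40x and 150x.
low = efficiency_gain(22, 40)    # ~1.8x more efficient, low-end estimate
high = efficiency_gain(22, 150)  # ~6.8x more efficient, high-end estimate
```

The roughly "six times" in the talk corresponds to the high end of that data-growth range; either way, data requirements are outrunning budgets.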
So, let’s think about what is happening here. The first thing to realize is that game play elements have not really become more complex. And by that I mean, the game play that was involved in the games in the early 90s, and the game play that’s involved today, midway through the following decade — they bear substantial similarities to one another. If you look at many of the top-selling genres, you can literally take a game from ten years ago, and set it down in front of someone, and they won’t need to read the manual. You can take one of the latest first-person shooters, send it back in time, and the players of those days would probably be able to understand what to do, even though their computers probably wouldn’t be able to run the game.
The thing about technology is that it has enabled a lot of really cool stuff, a lot of really cool visuals, in theory a lot of cool AI, and stuff, but the biggest effect it has had is to make game development more complicated and, more significantly, more expensive. And that’s because the technology is primarily focused on presentation.
Let’s do a case study here. Hopefully you all remember this game here. 1994. You know, when you look at the technology behind this, at the time it was a revelation. At the time it was astonishing that we would be able to provide this kind of visceral visual to the player.
What was the game play? If you think about it, you’re really navigating a 3d environment, and you’re lining up a cursor in 2d space and clicking on something. In a lot of ways, it’s a lot like the pop-the-bubble-wrap games that go around in Flash, with the added mixture of navigating through a maze. Now, think about what that core game play experience is. It’s an incredibly fun, frenetic, fast one, there’s no doubt about that.
Now, what was the evolution in game play in two years, as we moved to 1996? Well, the game involves running around inside a maze, and clicking on objects on the screen by lining up a cursor. But the guns are prettier. And the backdrops are nicer.
Let’s move forward a little more. The next game here, this game truly revolutionized first person shooters. We’re looking here at Half-Life. It introduced a level of story that was incredibly well integrated into the basic game play experience of running around a maze and clicking on objects on the screen in order to shoot them. So what we considered to be a major leap forward in game play — and don’t get me wrong, Half-Life was a true achievement — was nonetheless in terms of its game play complexity, not really a radical change.
It’s worth pointing this out, because as we advance through the history of first person shooters — here we are looking, I believe, at Quake 3 — we have to keep in mind that game play advances that complexify the experience typically create new genres, rather than refining a single genre.
It’s rare that we come across technology that allows us to truly rediscover the way a game might be played. In the case of a shooter, the biggest change the technology allowed us to do was to use the 3d space more fully, allow people to jump high distances, allow people to really, you know, stand on one spot, camp it, and shoot and frag people down below. But it’s been a long time since we got that. That came along quite a long time ago. And most of the technology advances since have really been incremental, and they’ve been in interesting areas, they’ve been in areas such as allowing users to create content that by and large isn’t as good as what the professionals do.
If we look today, we’re getting close to kind of the state of the art — you know we look at ’03, and Doom 3 —
— or we look at ’04 with Far Cry and now we have Half-Life 2, we see technology advancing, but the kinds of things that it’s providing us are better physics simulations, that allow us to move the crates around. And really, the puzzles that those things provide are not particularly sophisticated. They’re not really something that’s a true revelation to the game player. They’re still incredibly entertaining experiences, and I think we need to make the distinction here between game play design and experience design. When we look at the experience design of Far Cry, which is the slide you see now, it is of course, miles beyond that of the original Doom. But a lot of what we’ve done is improve the immersion factor and improve the standard ways in which we portray the environment rather than improve the fundamental game play.
So here’s the fundamental dilemma, and this may seem like a bit of a strong statement, but the real issue is that, by and large, technology tends to curtail creativity rather than assist it.
Now let me explain a little bit of what I mean there. Imagine that you were a painter, and you had access to every single kind of brush, every single sort of paint, every sort of pencil, charcoal, chalk, gouache, whatever — imagine that you had an infinitely sized canvas, larger than the universe. It would be awfully hard for you to arrive at what composition to place on the canvas. It would be difficult for you to think about what tools you should use in order to create that composition.
Creativity is enhanced by limitations. Creativity, innovation, is largely about finding solutions within a known problem space. When the problem space starts growing too large, you can pretty much start throwing anything at the wall, and it’ll stick. And in a situation where we don’t have a particular problem to solve, it’s just human nature to fall back on known solutions. It’s just human nature to do what we have done before, only to try to do it nicer. And that fundamentally is the limitation of advances in technology as regards game design.
Now let’s talk online games in particular, and what kinds of dilemmas they have.
The first thing to realize is, they have a much, much heavier content load than standalone games. These days the standalone game publishers tend to recommend that your standalone game experience have between eight and twelve hours of content in it. The reason for that is, most players don’t finish games. They move on. They play the next shiny thing. Given that, it doesn’t make sense from a business point of view to develop content for 200 hours of game play, when we fully expect the player to put down the controller and go rent the next game from Blockbuster. So, because of that, standalone games have had their budget increase but their game play time has actually been shrinking over the years as we learned more about players’ game play habits.
Now online games, however, rely on an ongoing revenue model. Whether it’s subscription, which seems to be the standard in the West these days, whether it’s a newer business model, such as the release of content for a fee, which Guild Wars is employing, whether it’s the current craze in Korea, which is micro-payments for digital items, we still rely on ongoing revenue. And this is because of the ongoing load involved in maintaining the online game. Because of this, it is still typical for online games to be designed with hundreds of hours of content. Even though the budgets have risen, our content load has kept pace. Unlike the console industry, where the budget has gone up but the amount of content has gone down — so that there’s a very clear development path there, “make it nicer” — we’re facing the dilemma that our budgets are rising, but not as fast as the content load.
Add this to the fact that online games are already far more technically challenging. Your typical console game does not need a database administrator. Your typical console game does not need to have the level of fault-tolerance that we need to have in our server infrastructure. So, we’re looking at anything from typically 5 to 10 times more difficult to develop than anything in a standard standalone or multiplayer game.
Lastly, because of these factors, our budgets are already higher than those of standalone games. They typically run anywhere up to double. What has technology given us? For as long as I’ve been making online games, 40% of our CPU load has gone to doing path-finding. It’s been 10 years, and we’re still doing 40% path-finding. Standard A* algorithm, you know, nothing particularly surprising there, and it’s still where we spend almost half of our CPU budget. It always seems that for the things we want to do, we don’t get more room in which to do them, because of all the additional load that continues to be incurred in these games.
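For readers who haven't run into it, here is a minimal sketch of the standard A* algorithm mentioned above, on a toy 4-way grid. A shipping game's pathfinder would be far more elaborate (navigation meshes, hierarchical search, thousands of concurrent agents), but the core loop is this:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D grid of 0 (open) / 1 (blocked) cells,
    4-way movement, Manhattan-distance heuristic. Returns the
    shortest path as a list of (row, col) cells, or None."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Priority queue of (f = g + h, g, node, path-so-far).
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to reach each cell
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and best_g.get(nxt, g + 2) > g + 1):
                best_g[nxt] = g + 1
                heapq.heappush(
                    open_set,
                    (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no route exists
```

The expense the talk describes comes from running searches like this constantly, for every moving creature, over worlds far larger than a toy grid.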
So, technology is a canvas. We tend to fill it up with software rather than giving game designers new paintbrushes to work with. Now, new paints, new tools, when they do come along, can really lead to lots of very interesting innovations. So for example, we’re seeing the rise of physics simulation as something that game designers are able to make use of. Those are things comparable to extending the canvas, extending our tools. But by and large, what we’re doing instead is just filling up the canvas with more colors, more textures, more stuff to pile into our limited space.
The interesting thing about that, of course, is that it just makes it more expensive to create. The first massively multiplayer games I worked on ten years ago had budgets well under $10 million, and these days we’re hearing of AAA massively multiplayer games costing $20 or $30 million, or well over that. So we are facing a dilemma here, because even though everyone wants into this business due to the recurring revenue of a hit massively multiplayer game, we have to keep in mind that the margins can get fairly tight. We have ongoing costs. And although the margins now are fairly nice, over time we are going to see margins decreasing, because this is a service industry, and in service industries — across all industries, really — we have seen a historical pattern of margins decreasing over time. That’s just the nature of the business.
So we are caught in a very interesting dilemma as regards online games in particular. I think it’s particularly epitomized today by the trend towards creating massively multiplayer games with the production values of single-player games, frontloaded with heavily single-player-game-like, heavily narrative experiences, often 8-12 hours of content, that are then directly compared by the consumer to the standalone game that cost $12m to make. So there’s an interesting challenge facing us there.
Now, let’s put this a little bit into historical perspective. Why is it that the money matters?
Lately I’ve been thinking about buying a tablet PC. I don’t know if all of you have seen these gadgets, but they’re pretty cool. It’s basically a screen, you have a digitizer tablet directly on the screen, you can carry it around, it recognizes handwriting… I do a lot of sketching, so it seems like a wonderful computing tool for me. These computers right now are costing around $3000, and they have not yet truly seen widespread acceptance. But the thing is, Alan Kay had a tablet PC [design] in 1967. He called it the Dynabook, and they [worked on it] built it out at Xerox PARC. It [would have] cost tens of thousands of dollars to build just one of them, but they had one.
The problem in our business, in the computing business and any business really, is that innovation tends to run very far ahead of market realities. It takes a while for real-world business factors to catch up. Think back again to those early computing days, the late 60s and early 70s: there was the Alto computer. This was one of the first truly personal computers, and it had a lot of really cool stuff. It took more than a decade — it took until the Mac — for some of the innovations in the Alto to come along and become accessible to a mass market audience.
Here’s the thing: the keyboard in the Alto was run by the same processor that is in this computer that I have behind me, that I am pointing at right now. The 6502. That’s how far apart the real world marketability was from the goals of the developers of the computers. There was a huge gap there, where the microprocessor that one developer thought was only suitable for detecting keystrokes, was considered suitable for running entire operating systems, at the time that it actually reached the mass market. So there’s always a long-running gap in between what we can achieve and what is actually marketable.
This of course leads to classic adoption curves. Now, you’re probably all familiar with classic adoption curves, but the interesting adoption curve that I really want to talk about is the game design adoption curve, which is also very heavily driven by technological adoption.
I worked on Ultima Online with Richard Garriott, quite a long time ago now — it’s been ten years since I joined that team — and back then we were very very worried about things like, “How many people will have Internet connectivity? How many people will be able to get a connection faster than 14.4? How many people will even be on the Internet as opposed to on an online service such as AOL?”
Of course, during those introductory stages, and then with the rise of EverQuest, and so on, we’ve seen dramatic growth in the acceptance of the massively multiplayer genre. The thing about game genres that we have observed repeatedly is that once they get past a growth phase and become mature, they tend to calcify. They tend to have a standard set of features that everyone accepts as the baseline, and everyone must mimic that feature set. By and large, innovation becomes heavily incremental. And the difficulty with this is that often it leads to a niche. It often leads to that game genre becoming so complex, so aimed at its particular fans, that it becomes difficult for it to attract new players to the market.
For example, in the late 80s to early 90s, the simulation genre was a huge huge part of the market, and over time we started getting manuals that were this thick, you pretty much had to be a subscriber to Jane’s Combat books in order to be able to play the helicopter simulations, and very quickly we lost the ability for new users to just come and pick up the game.
We saw something similar happen with real-time strategy games, where the boom that we once saw for that market has kind of gone down. It’s no longer quite as popular a genre as it once was in terms of percentage of the market.
This is a classic problem of game genres: as they standardize, they also become commodified to a degree. Instead of innovation being prized, what tends to be prized is presentation, and this causes budgets to rise while at the same time limiting the audience. We typically see, for example, a stretch towards requiring the latest computers, which results in the standard consumer being unable to run the game on their home machine. So, what happens with the advance of a game genre is that the innovation tends to get limited. It tends to get standardized.
So, looking at massively multiplayer, we’re talking already about the most expensive games to make, budgets running up to 3x the AAA budgets you saw, 400 hours of content needed instead of 8-12; traditionally there’s been less emphasis on visuals, but that’s changed — we’ve already seen the rise of the AAA caliber visuals really coming in, with our own EverQuest, with Guild Wars, with World of Warcraft, we’re seeing games that are matching the level of single-player games in terms of visuals. And then on top of that, we don’t get to skip the parts that make it massively multiplayer. We don’t get to skip the database support, the networking, the fault tolerance, and so on.
The question becomes, if all this cool technology is coming along, how do we leverage it in a way that’s innovative from a game design point of view? This is what I’m interested in, because innovation is what tends to drive new markets — so that’s why the business guys should care — and innovation is what brings fresh new game play entertainment experiences to users, which is why the customers should care. Technology is advancing, but isn’t necessarily buying us this.
What is it that computers are good at? Let’s think about that. What is it that the CPU really can get us? Computers are fantastic at simulation. They’re very good at modeling things. That’s a good thing for us, because games by and large tend to be simulated models of problems, simulated mathematical situations. There’s a reason why mathematicians like to break down games in order to figure out the math behind them.
The second thing computers excel at is algorithms. I don’t mean a clever solution to a problem, no — what computers are really good at is taking a brute force solution to a problem and running it a billion times. Basically, solving small problems in a predictable way is something that computers really, really excel at, and something that drives people nuts to do by hand. That’s the reason why spreadsheets were the first killer app for personal computers.
Lastly, the new thing that computers have become incredibly adept at is all the forms of networking. This isn’t really new, but its mass market acceptance is somewhat new. It’s interesting that from very early on, computers were conceived as networked entities, and it’s only relatively recently that we’re starting to see the market acceptance of networking in game entertainment as being truly critical. So, given all that, let’s look at how a massively multiplayer game is built, and where these tools might be applied.
Now, hopefully many of you know roughly how a massively multiplayer game works. Let’s start over at the left. When you connect with your client, your client is really a dumb terminal. So, a lot like a web browser, really, except that, in most cases, it already has all of the art for the game installed on your hard drive.
Now, a multitude of clients using this artwork connect first to a billing database of some sort as part of the process of logging into the game. This is so that we can do authentication. Once you get passed through the login server, then you connect to what we think of as the game proper.
The game proper is actually running many copies of the same game. We keep daydreaming about the “shardless world” as people call it, the big world, but we haven’t been able to do that, and I’ll talk about why in a moment. So we connect to individual copies of the game. These are the blue boxes that are enclosed there.
Now, each of these individual copies is actually a server cluster. It contains multiple different servers running different kinds of computing processes. The first one is the user server. This goes by many different names, but fundamentally this is the part of a game server that talks to the client, accepts input, and tells you what’s going on. That’s all it does. It’s the traffic cop.
And then you have the simulations, which you could think of as the game play. And these are the game servers proper. These are the processes that decide that the dragon walks left or right, the game processes that know the fact that you unsheathed your sword, that you’re now attacking the dragon. The game processes that keep track of everything from the clouds moving to the creatures moving around. This is where 40% of our CPU is burned on path-finding. It’s on these individual processes here.
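The user-server/simulation split just described can be sketched in a few lines of Python. The class names and the dragon example are purely illustrative, not from any real codebase:

```python
class SimulationServer:
    """Owns the game state and the game logic: it decides what
    actually happens when a command comes in."""
    def __init__(self):
        self.state = {"dragon_hp": 500}

    def handle(self, command):
        if command == "attack_dragon":
            self.state["dragon_hp"] -= 10  # the sim resolves the attack
        return self.state["dragon_hp"]

class UserServer:
    """The traffic cop: accepts client input and relays it to the
    simulation. It holds no game logic of its own."""
    def __init__(self, sim):
        self.sim = sim

    def on_client_input(self, command):
        return self.sim.handle(command)  # route, then report back
```

In a real cluster the user server is also doing connection management and filtering what each client is allowed to see; the point of the sketch is only that game decisions live on the simulation side.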
Now, each of these clusters is going to be talking to two different kinds of databases. They might not necessarily be formal “databases” in the traditional, relational sense.
The first of these is the static game database. This is all of the information, the static content, that people like me had to make to put together your experience. This is where we specify things like where the dragon lives, how useful that sword is, what stuff you might acquire over the course of your game play. This data only changes when designers update it. It changes when we choose to make changes to the game.
Now, this static game database is where all of that content budget goes. This is the expensive part right here. It is so expensive that we can’t actually make enough of it for all the users to have different data at the same time. This is why shards exist. It’s not because we can’t make a server that holds 100,000 people. That’s not the issue. There are in fact games that have succeeded at getting tens of thousands of simultaneous players in one world. That’s not the problem. The problem is that we won’t have enough for them to do. We won’t have enough creatures put down, we won’t have enough content. We won’t have enough interesting things. We can generate endless expanses of nothing, of empty terrain with nothing to do. What we can’t do is put interesting content down all over those maps.
The second kind of database that we have is the runtime database. The persistence database. This is the database that keeps track of what has changed about players and what has changed about the world. If the static game database is what allows us to create a fresh copy of the world in an initial state, the runtime database is the part that gets changed over time. It’s almost like the static game database is the line drawing that appears in a coloring book, and the runtime database is the part the players color in — and often color outside the lines on.
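The coloring-book analogy can be sketched in a few lines of Python. The location names and fields here are made up for illustration; the point is that the runtime store holds only deltas against the static baseline:

```python
# The static game database: the line drawing in the coloring book.
# Designers author this; it only changes when we ship an update.
STATIC_GAME_DB = {
    "dragon_lair": {"creature": "dragon", "hit_points": 500},
    "town_square": {"creature": None, "hit_points": 0},
}

# The runtime (persistence) database starts empty: a fresh shard
# is just the static data in its initial state.
runtime_db = {}

def world_state(location):
    """Current state = static baseline overlaid with runtime changes."""
    state = dict(STATIC_GAME_DB[location])
    state.update(runtime_db.get(location, {}))
    return state

# Players wound the dragon; only the delta is persisted, the
# static record is never touched.
runtime_db["dragon_lair"] = {"hit_points": 320}
```

Every shard shares the same expensive static data; what distinguishes one shard from another is only its runtime overlay.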
We run into limitations with the runtime database too. We’d like to provide more things that players can affect in the world, but technology hasn’t quite kept pace on this front either.
So, that’s kind of the key lesson that I want to point at here: most of the things on the left side of the diagram are things where technology can help us. It can give us more robust clients, ever more complex billing solutions, slicker login processes, handling more users, more game servers running more kinds of processes — the problems it hasn’t really helped us solve nearly enough are the static game database and the runtime game database, because the pace of content creation tools has not kept up with the rise of technology, and persistence has not kept pace either. In fact, it’s actually retreated. Very few massively multiplayer games focus on having a high degree of persistence, even though persistence is our key unique selling proposition. That’s how we justify ongoing fees.
Way back in the text mud days, there was a very clear split in game design between games that emphasized the static game database and games that emphasized the runtime database. A lot of the interesting innovation happened there because the technology for everything else you see on the screen, the game servers, user servers, billing, login, and client, was all locked down. You worked with Telnet and you worked with text, and you worked on whatever computer your university would give you. Usually you didn’t get all of it, either. You didn’t get all of the CPU.
Because of this, the place where designers had to get clever was in creating interesting new static game experiences, interesting new runtime experiences, such as user modifiability; such as live game masters that played along with players and affected their experience; such as the invention of things like user scripting, player housing, truly persistent worlds where the map itself lives in the runtime DB rather than being static data. An entire branch of game design in the massively multiplayer arena in text mud days didn’t even have a static game database at all because of the content creation costs and because of the particular design interests that people were pursuing.
So, it’s interesting to look back at those days when we had a limited canvas and compare it to now. Today, tech is solving many problems. It’s solving networking problems, billing infrastructure problems, authentication problems. What it hasn’t really given us huge leaps forward on are the issues of the runtime DB and the static DBs. And those are really the issues that we’re going to have to try to solve and figure out in order for our game play to continue advancing.
The current game play paradigm that’s probably best epitomized for everyone by EverQuest is a primarily static database game. We save off into the runtime DB what we tend to call an extended character state, which is to say, we save the information about a particular player and we save the information that’s related to that player. This might include the location of their corpse, so it doesn’t go away when the game crashes. We save things like, perhaps, the contents of an apartment, which is really nothing more than an incredibly fancy backpack or inventory on a player, but is represented differently within the game world. We save things like guild membership, which is really just saved by little tags on the character record.
By and large, an extended character state from a database point of view can be described in a fairly standard DB, where you have a bunch of fields and each one has a value. It’s a very static sort of setup.
The branch of online world design that we’ve largely had to abandon because of technical problems is what might be called the more “virtual world” side of things, where if you drop something on the ground, it persists even though it does not belong to a given player. There are many design challenges with this — players can clutter up the world. There are also a lot of expressive possibilities — players could build the world Lego-style, as many smaller games have done.
Now, we are seeing games trying to tackle these issues of the runtime DB and the creative expression that it offers designers, but the problem is that by and large they have to pick one of these two battles. When you look at a game such as Second Life for example, which focuses heavily on the runtime DB, what you find is that they’ve spent almost no money towards the static DB, because the cost of solving the problems of the runtime DB pretty much consumes all of their effort.
So, broadly speaking, if you ask me what the core problems facing massively multiplayer games today are, the ones we need to solve going forward: first of all, runtime databases are just very limited in terms of what we save and how much we can save. The daydream of players is that each one leaves their mark on the world.
Sometimes we say this as “players want to be a hero.” But not all players want to be the same sort of hero. Some people want to be heroic because they found a new city, and others want to be heroic because they slay the dragon. Some people want to be the heroes because they create a long-standing group of friends, and that’s just as valuable an achievement.
These are all problems that tackle the runtime DB, and right now we just can’t do worlds that change very much. The biggest obstacle is actually the static DB. We don’t want to make worlds that change too much because it costs us so much to build the static world in the first place. If out of those $10m we spend $6m on building an incredibly beautiful, detailed zone, well, we don’t want players to be able to dig trenches in it and make it ugly, and undo $6m worth of work!
The other problem is that we just can’t track very much data on each object, because we’re tracking billions of objects. Now, you might say, hey, “banks can do this.” Frankly, banks have a much bigger budget to solve this sort of problem. We just have trouble making highly variable objects. There’s a reason why, when you get a sword in an MMO, it’s basically the same sword that somebody else gets too, and that’s because making a custom sword gets incredibly expensive from a storage point of view.
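To make the storage argument concrete, here is a rough back-of-the-envelope sketch. The item fields and serialization sizes below are purely illustrative, but the scaling is the point: a template reference costs a few bytes per instance, while a fully custom item costs every field, per instance, times billions.

```python
# Illustrative sketch of why custom items are costly at MMO scale.
# A shared-template sword only needs a template id per instance;
# a fully custom sword needs every field saved per instance.

template_instance = {"template_id": 1042}  # stats live in the static DB
custom_instance = {
    "name": "Bloodfang", "damage": 17, "speed": 1.4,
    "material": "meteoric iron", "engraving": "For Elli",
}

# Crude proxy for serialized size: length of the text representation.
per_item_shared = len(str(template_instance))
per_item_custom = len(str(custom_instance))

BILLION = 1_000_000_000
print(f"shared: ~{per_item_shared * BILLION / 1e9:.0f} GB per billion items")
print(f"custom: ~{per_item_custom * BILLION / 1e9:.0f} GB per billion items")
```

Even with this toy encoding, the custom record is several times the size of the template reference, and the gap only widens as designers add fields.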
Now, on the other side of the equation, the static DBs. We’re spending all these millions of dollars to generate the content. Think about it: when 3d rendering first came along, first it was easy, you flat-shaded the polygon. Then you had to make a texture. Then you had to make two textures, one texture and one bump-map. Now we have, you know, all of these fancy sorts of normal maps and we have shaders, and specular maps, and all of these other kinds of things, and of course, the detail level of the textures has risen — by a factor of, oh, I don’t know, ten.
So, the content has become more complex, more difficult to make, more expensive, and then we can’t change it, because it takes too long to make. And then the march of technology renders it obsolete. It wasn’t that long ago that the coolest looking 3d games looked amazing and now we play those games from 1997 and we think they look terrible, because our technology has rendered all that incredibly expensive content obsolete.
That’s an interesting contrast to make with something like film. We often get compared to the film industry, but the film industry has by and large locked down its technology. Changes in cameras are fairly incremental. We’re seeing an explosion these days in CGI, but by and large the basic technologies in filmmaking remained very static until the recent developments now of digital filming and HD filming, which are not seeing a lot of acceptance precisely for the same mass-market expenditure reasons that we’ve talked about earlier: nobody wants to retrofit all the theaters to project digital.
Now, technology isn’t just a straitjacket. It does offer ways out of these things, ways that do focus back on algorithms, ways that do focus back on the strengths of technology.
How can tech help? First of all, as has become a major buzzword among the game development community, it can offer us roads towards procedural content.
Already, if you play a contemporary game, odds are very good that the trees you see in that game are not all modeled by hand. Odds are they were generated by a computer using an algorithm, using middleware, products such as NatFX and SpeedTree, for example.
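The basic idea behind those middleware tools can be sketched in a few lines. This is not SpeedTree's or NatFX's algorithm, just a minimal hedged illustration of the principle: a recursive branching rule plus a little seeded randomness yields endless distinct trees from a tiny amount of code instead of hand-modeling.

```python
import math, random

# Minimal sketch of procedural tree generation: each branch segment
# spawns two shorter, slightly jittered child branches. The constants
# (branch angle, shrink factor, cutoff) are arbitrary illustrative values.

def grow(x, y, angle, length, depth, rng, segments):
    if depth == 0 or length < 0.5:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))  # one drawable branch segment
    for turn in (-0.4, 0.4):             # two child branches per segment
        jitter = rng.uniform(-0.2, 0.2)
        grow(x2, y2, angle + turn + jitter, length * 0.7, depth - 1, rng, segments)

def make_tree(seed, depth=6):
    rng = random.Random(seed)
    segments = []
    grow(0.0, 0.0, math.pi / 2, 10.0, depth, rng, segments)
    return segments

# Same seed always regrows the identical tree; new seeds give new trees.
print(len(make_tree(1)))  # number of branch segments in one tree
```

The key property for game use is determinism: store one integer seed and you can regenerate the exact same tree on every client, every time.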
The next thing that can help is sandbox design. This is something such as The Sims, games where users can make use of tools provided within the game itself in order to provide new kinds of game play experiences. This is a very algorithmic approach. It’s something that computers do very well.
Lastly, technology can help with user-created content. As technology marches on, the content load becomes more difficult to create, but technology can help in tools as well as in setting the bar higher. We’ve already seen that many of the most popular games on the market have a heavy reliance on user-created content. The entire genre of the first person shooter these days is propped up to a very large degree by user-created content. We should remember that 90% of the online game players out there are playing a game that was not developed by a professional: they’re playing Counterstrike, which was user-created.
User created content manifests in everything from The Sims to mods to frankly, developments of the IP, to people create fan fiction and comic strips, and so on. And technology can serve as an enabler for all of these things.
Let’s look at each of these one by one.
Procedural content is CPU-dependent. What it most needs is CPU. Now, I mentioned earlier how our CPU headroom isn’t actually rising as much as we would like, because we keep packing in more content, and that results in pathfinding, for example, continuing to take 40% of our CPU budget.
To really make procedural content work, we have to go back and re-examine some of our basic assumptions about how these games are built. Probably the example that everybody is talking about these days, of course, is Will Wright and his project Spore. He went back to first principles and asked, “Why must we look at static content this way in the first place? Why do we have to make the assumption that we’ve been doing it right all of these years?”
Another interesting film analogy would be to look back at early film. Early on, it was basically the filming of vaudeville; it was the filming of theater. There were established assumptions about where the viewer has to be in relation to the action, so the camera was placed so it could see the entire proscenium stage, and the action took place there. It wasn’t until we embraced the true capabilities of the camera and the filmmaking medium that we started seeing shocking things like close-ups, cuts, wipes, dissolves, dolly shots — these things suddenly exploited the capability of the camera and started to build a filmic language.
Procedural content is an implicit part of the “filmic language” of computer games, but we haven’t really embraced it as such yet. We haven’t really regarded it as a key element. Part of the major reason is, right now, we’re really bad at it. Right now, the kind of procedural content we tend to make tends to be the output of a very boring algorithm that makes generic and unsatisfying content.
Let’s not kid ourselves, a fractal terrain landscape that repeats endlessly is still just hills going out to the horizon. We’re not going to happen across the Grand Canyon until we get much smarter about how we design our procedural algorithms.
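As a hedged illustration of exactly that "hills out to the horizon" problem, here is the classic naive approach: one-dimensional midpoint displacement. The parameters are arbitrary, but the output shows the issue. It produces plausible, endless, and utterly generic terrain; nothing as deliberate as a Grand Canyon ever emerges.

```python
import random

# Naive fractal terrain: 1-D midpoint displacement. Each pass inserts
# a randomly displaced midpoint between every pair of samples, with the
# displacement scale shrinking at each finer level of detail.

def midpoint_displacement(iterations, roughness=0.5, seed=0):
    rng = random.Random(seed)
    heights = [0.0, 0.0]            # start from a flat line
    scale = 1.0
    for _ in range(iterations):
        out = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-scale, scale)
            out.extend([a, mid])
        out.append(heights[-1])
        heights = out
        scale *= roughness          # smaller bumps at each finer level
    return heights

terrain = midpoint_displacement(8)
print(len(terrain))  # height samples: statistically hilly, never designed
```

The algorithm has no notion of geology, drama, or gameplay; making procedural content interesting means layering design intent on top of noise like this.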
I happen to believe that we can be that smart; I happen to believe that there’s no doubt that this stuff can be done. But in order for us to tackle it that way, we have to make it a goal. We have to say to ourselves “procedural content is one of the strengths of computer games,” as opposed to board games, as opposed to other forms of games, and therefore we should regard it as intrinsic a tool as the close-up is in film.
It does have some advantages. Most particularly, it buys us storage. And this is a pretty big deal, because one of the big strengths of technology is a wired world, and we’ve always been wrestling with the width of that pipe. You know, there’s a reason why cell phone games are mostly clones of games from 25 years ago, and it’s not just the platform, it’s also a question of what we can distribute, right? We have to worry about distribution channels.
As time goes on, we’re going to see the rise of digital distribution more and more. As that happens, the size of pipes — yes, it’s rising, but it’s not going to keep pace with these 1.4 gigabyte things that we want to send around. Really, these games are going to be significantly larger than the movies that people trade on BitTorrent, and the fact is, a lot of people can’t do that because it just takes a lot of bandwidth and a lot of storage. Most people don’t have terabyte servers at home. Yet. So being able to recreate and replicate data is going to be a significant tool for us over the short term, until pipes get really, really large.
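That "recreate and replicate" trick can be sketched very simply. Assuming deterministic generation (the chunk format below is made up for illustration), you ship a few bytes of seed instead of megabytes of data, and every client regenerates bit-identical content locally.

```python
import random

# Sketch of the bandwidth argument: instead of downloading a block of
# terrain data, ship a tiny seed and regenerate identical data on every
# client. Chunk format and sizes here are purely illustrative.

def generate_chunk(seed, size=64):
    rng = random.Random(seed)
    return [[rng.randint(0, 255) for _ in range(size)] for _ in range(size)]

seed = 123456
chunk_server = generate_chunk(seed)   # built once on the server
chunk_client = generate_chunk(seed)   # rebuilt locally from the seed alone
assert chunk_server == chunk_client   # bit-identical, no bulk download

print("cells replicated from one integer:", len(chunk_server) * len(chunk_server[0]))
```

The trade, of course, is bandwidth and storage for CPU cycles spent regenerating, which is exactly the exchange the talk is arguing we should be making.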
Let’s think about sandbox design. Now, there are a lot of advantages to sandbox design. The biggest is of course, it provides players with the tools to amuse themselves.
Game designers, it’s painful to admit it, we’re not omniscient, we don’t think of everything that there is that the players might want to do. Giving players the ability to invent things, come up with new ways to play the game, that can be pretty important to allowing people to remain players longer.
This isn’t really a new phenomenon. Let’s look at other games — games such as soccer, or football, depending which continent you’re on. A lot of house rules develop. The fact that we have developed house rules to play soccer with only one goal in the street, that’s an expression of sandbox design. The rules of soccer proved flexible enough that players were able to adapt them and find ways to play in different ways. We see this in games too — rocket jumping in first person shooters is an example of sandbox elements coming forth and becoming a major element of game play. This kind of thing becomes self-refreshing content for self-starting players, and it is incredibly valuable.
But it does fail players who are most interested solely in directed content. Right now, that’s most of them. Most players come from an audience point of view. They want to be told what to do. They want to be given a particular problem to solve. They want to approach things in that way. Right now, even though this isn’t what really necessarily plays to the strength of the medium, a lot of the best-selling games out there are really hybrids between films and games. The interactive movie is here right now, and its name is Jak and Daxter, OK? It is games like that, which have heavy story elements with game elements interspersed, heavily directed content.
So that is where a lot of the audience is, and sandbox design has a place even in games like that, but as of yet, players have not yet truly broken out in a mass market sense to embrace broader sandbox design. When it happens, it happens big, as in examples like The Sims. But right now, those examples are still few and far between.
Lastly, user created content. User-created content is fundamentally the engine behind franchises such as Sims, Half-life, Quake, even Dungeons and Dragons. Even chess. Chess — a very large part of the chess community spends their time solving chess problems which are developed by other chess players. There’s no company out there creating chess scenarios to put out there, that’s just not how the community works.
The issue of course is, only around 1% of users create decent content. Sturgeon’s Law states that 90% of everything is crap. The fraction that isn’t crap, though, the stuff that’s actually really good, is often better than what professionals can make. So this is really an issue of scale. It’s really a question of how many people we can bring to bear on the problem.
Right now, we cannot bring to bear very many, because the tools suck. They’re terrible. The user content tools today tend to demand the same level of expertise as professional game development. Even the incredibly easy to use tools like the ones that ship with The Sims require the level of ability that game developers had 20 years ago. They’re still tools for game developers; they’re just not very advanced game developers.
In order for us to really unlock the power of user created content, we have to move the level of capability — the level of skill — that players need to have far, far lower on the ladder. The reason why something like Dungeons and Dragons has zillions of player-created modules, the reason why something like Star Trek has thousands of player-written stories, is because the act of writing is something that almost everybody is trained in.
In order to increase the amount of good quality user content, we have to push the whole pyramid up: enable more people to make bad user content by giving them easier tools. We can probably even make tools that will allow them to make content that isn’t truly terrible, because the computer is pretty good at detecting if something is outside of its usual parameters. That’s the genius of something like Lego, where you can’t really take half of a Lego block and attach it to something, unless you’re a real hacker.
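That Lego property, where pieces only "snap" in valid ways, amounts to validating user content against known-good parameter ranges before it ever enters the game. Here is a minimal sketch; the schema and field names are entirely hypothetical.

```python
# Sketch of "Lego-style" user-content tools: the tool constrains input
# to known-good ranges, so invalid content is hard to produce at all.
# The schema below is purely illustrative.

SCHEMA = {
    "wall_height": (1, 10),   # (min, max) in tiles
    "door_count": (0, 4),
    "roof_angle": (0, 60),    # degrees
}

def validate_room(design):
    """Return a list of problems; an empty list means the piece 'snaps'."""
    problems = []
    for field, (lo, hi) in SCHEMA.items():
        value = design.get(field)
        if value is None:
            problems.append(f"missing {field}")
        elif not lo <= value <= hi:
            problems.append(f"{field}={value} outside {lo}..{hi}")
    return problems

print(validate_room({"wall_height": 3, "door_count": 1, "roof_angle": 30}))
print(validate_room({"wall_height": 99, "door_count": 1}))
```

The design point is that the constraints live in the tool, not in a moderation queue: the floor of content quality rises because the worst mistakes become unrepresentable.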
In the end, if tomorrow desktop computers locked and never changed again, if for one reason or another ATI, nVidia, and everybody else stopped all R&D, there was no more development of CPUs, and CPU power just froze and Moore’s Law came to a halt, we’d still have a hell of a lot of room to grow in massively multiplayer games and online games in general. And in fact, in all forms of games, because online games are just the leading harbinger of the future. Because we’ve barely delved into the three approaches above.
The fact is, artists in particular have noticed that we’re approaching a wall ourselves with our visuals. We’re finding that right now, we’re in a very uncomfortable place called the Uncanny Valley, where the level of technology has driven visuals just to the point where they look real enough to look really really wrong, because the human eye is very good at detecting subtle mistakes. We’re at an interesting point now where all the games kind of look the same, because they are all using the same shaders and the same cool technology, and they all look subtly plastic, and subtly artificial. So we’re approaching this uncanny valley now, and we’ve barely tapped procedural content and user content and all of these other things that we should be doing.
Frankly, if CPU power locked right now, it’d probably force game designers and game programmers to have to get clever, and get creative, and get smart. So, in the end, right now we see Moore’s Law rising, but from a design point of view, it’s making us hit a wall creatively. But it’s a wall of our own imagining, it’s a wall of our own creation. We have the CPU power now.
Here’s what we ought to be doing in order to bring the next generation of massively multiplayer games, the stuff that people really want. We need to spend those CPU cycles that we’re gaining today, every day as tech improves, and spend them on simulation, on AI, and on better tools, and not just on nicer trees, prettier water, and more dramatic dragons. Because if we keep doing that, the market will become commodified, and commodified markets become limited in audience acceptance. And then, they stop growing, and people get bored, and they don’t stick. And we don’t make as much money. Then the field just won’t be as exciting.
So really, I think that’s the challenge that’s facing us. Let’s make the right choices about the capabilities that technology offers us. It isn’t just more pretty pictures. The future lies in more interesting games.
And that’s really all I had to say.
The first question is whether or not Sony Online uses external game development, whether we outsource to decrease the development budget.
The answer is, almost every game developer these days uses some degree of external game development or outsourcing, and that’s because a lot of the portions of the technology in game development have become highly specialized.
For example, a motion capture studio is not something that every developer has available, so it’s very typical these days for particular elements of a game to get outsourced. The question is which parts of game development get outsourced. Motion capture, for example, gets outsourced very frequently. A lot of studios outsource their music, such as composing, mixing, and so on — that’s a very common thing.
It’s very common to outsource art tasks these days, so there’s been a rapid rise in the growth of art houses all over the world. It started out with these art houses providing primarily level of detail work, so the concept work would be done in-house, and then level of detail work would be done out of house since it’s a repetitive kind of task and the in-house artists would then be able to work on the more creative work.
This outsourcing is happening all over the world. The question asked about development companies in Ukraine, for example, with low development costs but talent and potential: how do these companies develop into external developers? And the answer is really, they develop great portfolios and they get their name out there. There is absolutely a market, because all of the game development studios are facing this dilemma, so that challenge is a very common one.
Well, I’m not entirely sure what the question is asking in terms of financial models. Over the long term, for the financial models to really work, first of all we need to reach a point where the equivalent of the camera getting locked down in film happens for videogames. There needs to be an established base platform: this is as much as you need to do, in terms of visuals.
I actually think we’re several years away from that still. There are still significant challenges. Film, after all, is still exploring what it can do with rendering, and we’re going to lag several years behind film. So I think for the financial model to work in the long term, we need to lock down the technical platform. In the short term, we need to explore solutions that aren’t as expensive. I listed some of them, but another one, for example, that we’re seeing now — there’s a lot of interest in nontraditional rendering, because it provides a distinctive visual look. It allows people to be visually immersed in something that’s incredibly fascinating and exciting, but at the same time, it doesn’t necessarily require the same load on the part of the artists. Instead, it’s primarily a technology investment.
Now, you know, there’s kind of a saying that “art is expensive but individual artists are relatively cheap.” That’s because we’re having to pile more and more and more artists onto the problem. Something like nontraditional rendering, cartoon rendering, even cartoon art styles, can free up your creativity without forcing you to develop endless normal mapped models at incredibly high resolutions, which is very time-consuming. That’s one of the kinds of solutions the industry is working on applying right now: finding ways for the financial models to work in that direction.
The other trend, which players frankly are gonna hate, is the rise of ongoing costs for games. Players are probably going to react negatively to this because they’re used to a one-time purchase, but I believe that along with the rise in digital distribution, we’re going to start seeing the application of online game business models to single player gaming. We’re going to start seeing micro-transactions, subscription fees, and so on, taking place.
I think a real harbinger of this is Steam, which is Valve’s distribution method for Half-Life and their other games. That kind of platform, Sony Online has a very similar one in house, actually, that we use for digital distribution of our own, and we’re already seeing talk from the console developers about digital distribution in the future. So I think this is another path that people are pursuing in terms of financial models.
I think it’s actually already here. The AAA games are already heavily reliant on game soundtracks, and we’re seeing more and more of a close working relationship between the big publishers and the music industry.
It’s interesting because of course some of the strengths that the games industry is pursuing aren’t in fact the things that the music industry is having fits over these days, which is to say digital distribution channels and so on. I think it’s inevitable that music will become a larger part of the online game industry in particular.
This is something that’s near and dear to my heart, I’m a big fan of it, but I suspect that a lot of it will probably be in very connected kinds of ways. I wouldn’t be taking a cue for online games and music necessarily from Billboard, from radio, from traditional CD sales, and that sort of thing. I think it’s going to be a lot more like the kinds of uses that peer to peer networks are going to be put to, I think we’re going to be seeing things like streaming music. Users want to integrate music more closely into their own experiences.
A very common demand we already hear is for us to provide MP3 players so that players can play their own music collections during our online gaming experiences. So I think we’re going to continue to see that convergence, and I think it’s inevitable that user-uploaded music is going to become a form of online gaming in some fashion. I could easily see an American Idol sort of game coming along some day, a Pop Idol sort of music competition. We already see things like Garageband.com. It’s very easy to imagine that kind of thing becoming an online game, even with online subscription fees. So, I think the convergence is coming; it’ll be a question of figuring it out.
Thank you very much for listening, and — and I hope you enjoyed it.