Going big?

Posted by Raph · Game talk
Feb 20, 2007
 

So of course the SLogosphere is all a-twitter over the contingency measures announced by Linden Lab in order to prevent extreme lag. The method? Blocking logins to non-paying players. Plenty of commentary at Clickable Culture (Tony also has a nice reaction round-up post) and 3PointD, plus many more places I haven’t bothered to click through to yet…

ReBang frames the whole thing in comparison with the recently Scobleized new platform Outback Online, which apparently brings in some peer-to-peer technology in order to alleviate network load.

That said, it’s deeply weird to me that, given the nature of the issue, the things the Outback guys are touting to Scoble mostly involve graphics:

1) The quality of graphics on Second Life aren’t good enough to do lots of things.


4) They see that by focusing on Windows only at first they can push the edge of graphics (and, they are working on an Xbox version too that’ll bring lots of people into this world). It’s among the world’s most graphically intensive C# applications.
5) Instead of hosting everything on centrally-located servers they are using P2P to get more people onto islands and bring better graphical performance.

Jeez, the issue is so so so not graphics…! Particularly not quantity of polygons pushed and what shaders are on said polygons… And I don’t mean the issue with SL specifically, but with virtual worlds in general.

The thing about all this that gets me is the emphasis on “big,” as in big numbers, big landmasses, big high-end graphics engines… Haven’t we learned how little “big” really matters? The huge numbers of players are illusions — you don’t play with that many all the time anyway — never mind the controversies over whether the huge numbers are real in the first place. The high-end graphics are inconsequential — they are eye-candy for initial attraction, but all the biggest MMOs in the West have low-end graphics: WoW, Habbo, Runescape…

I am a lot more interested, at this point, in “right” than “big.” As in, right for the people who want to use the platform, be it a game or a world. It’s not about polishing chrome, it’s about whether it works.

  20 Responses to “Going big?”

  1. Trackbacks (Feedster on: second life): Grid Down for Maintenance Tomorrow · Augmented Virtual Reality · The registrations game · Going big? · Relay for Life moving ahead · Second Life – Pyramid Scheme?

  2. Agree. And that’s the reason I wrote the entry “Metaverse Now” back in December and why in the post to which you link I avoid graphics by saying “newer, more capable, more stable grid”.

    What I personally find most interesting is the centralized versus distributed methodologies. And those go to the point I made in that old post: users care about connectivity and the quality of that connectivity. If P2P delivers on quality, it could look worse than SL and still win, imo.

  3. Let’s hope that the Outback folks are victims of bad reporting and careless quotes.

  4. Well, Raph, I guess I want to ask you a more basic question.

    Do you think it’s possible to make a world with a centralized asset server that waits on 5000-plus sims…then 2000 more…then 5000 more…with the costs of physical space, electricity, cooling…and then all the management issues…then stuff like James Linden (Cook) wrote on 3pointD about how there’s all these tables of stuff with the “power law” of lots of people muting one person, the data calls to do that…then people muting two people…blah blah, you know all that coding stuff.

    Do you think this is possible and practical? I mean, won’t it crack somewhere?

    And won’t they be sorely tempted to just start a *new* asset server somewheres and put on a new set of 5000 and make like Eduworld or Bizworld and leave Blingworld behind?

  5. You can call WoW’s graphics a lot of things, but “low-end”? Especially when you mention Habbo and Runescape in the same sentence.

    Low system requirements? Sure. Stylized and requiring a lower than expected polygon count? Right on. But the graphics are still comparable to the mainstream games around the time it launched, so the label “low-end” doesn’t really fit.

    But, you are still right: it’s not the graphics that prevent many activities in social games like Second Life.

  6. I would characterize World of Warcraft’s graphics as “low-end”, too. That does not necessarily imply low production quality though. I view Unreal 2007’s graphics as “high-end” with ultra-high production quality. Could a massive online world use graphics of that caliber one day? That would be unreal.

  7. Too early to say whether it works. I could only make such a judgment by being able to build something in Outback, interact with others online, etc.

    I got a demo and reported what I saw. The rest of it will have to wait until the alpha test opens up. I’ll report more then.

  8. I think WoW’s graphics are definitely on the low end in terms of technical requirements, which is the context that I get from the Outback statements. The WoW art direction is phenomenal — but you can achieve a gorgeous look with very little in the way of graphics tech.

    Prokofy, there’s no doubt that there are limits — but I also think that there are architectural choices in SL that give them extra burdens (that are a tradeoff for greater flexibility, of course). The number of avatars per sim, for example, is really, really low, and it’s in large part because of the burden of all the scriptable prims. The issue isn’t the asset server, it’s the updates. (A back-of-envelope sketch of that update math follows this comment.)
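To make that update-traffic point concrete, here is a minimal back-of-envelope sketch. Every figure in it is an assumption for illustration, not a Linden Lab number:

```java
// UpdateFanout.java -- why per-sim avatar caps are about update traffic,
// not asset storage. All figures below are illustrative assumptions.
public class UpdateFanout {
    public static void main(String[] args) {
        int avatars = 40;         // assumed per-sim avatar cap, 2007-era
        int dirtyPrims = 200;     // assumed scripted prims changing state per second
        int bytesPerUpdate = 60;  // assumed bytes per position/rotation/param change

        // Every change must be fanned out to every connected viewer,
        // so outbound traffic grows multiplicatively:
        long outbound = (long) avatars * dirtyPrims * bytesPerUpdate;
        System.out.printf("one sim pushes ~%.2f MB/s of updates%n", outbound / 1e6);

        // Double the avatars and the scripted activity they trigger,
        // and the cost roughly quadruples. Storage never enters into it.
    }
}
```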

  9. WoW scales by keeping dynamic content to a bare minimum and storing all its canned assets on the client. Habbo scales by being 2D and pixel-cute. Runescape scales by looking terrible. They all promise a certain level of quality and (mostly) maintain that level. There’s a lot to be said for successfully managing expectations.

    However, even if you’re not going for the “big” poly-count or population numbers, maintaining a consistent QoS for any 3D VW that offers more than Runescape-quality graphics is going to be difficult with a typical stream-from-the-server-farm architecture. That’s where P2P has the potential to be useful (on the other hand, “consistent QoS” isn’t something I generally associate with P2P networks). The rest of the “push the edge” stuff in that Outback interview is just boilerplate marketing patter that I hardly even register anymore, I skim over it so fast.

  10. Raph,

    The C#/Xbox bit would tend to indicate use of XNA. Effectively, this means they’re writing graphics under managed code, which is a little like trying to get Far Cry performance out of Java – not impossible, just (until now) really psychotically hard.

    I believe those bullets are intended to neutralise worries over performance under managed code. Hell, I had to do a full comparison test to prove that managed code is (provided you know what you’re doing) not usually a noticeable performance hit and publish my results, and I still get questions and people expressing dubious opinions, so I can sympathise. (A toy version of that kind of timing harness follows this comment.)
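As an editorial aside, here is a toy version of that kind of timing harness, in Java rather than the commenter’s C# (both are JIT-compiled managed runtimes). It is not his published test; the workload and iteration counts are made up, but it shows the classic mistake such results have to answer for: timing managed code before the JIT has warmed up.

```java
// MicroBench.java -- a minimal managed-code timing harness (illustrative only).
public class MicroBench {
    // Hot path under test: simple vector math, no allocation inside the loop.
    static double dot(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
        return sum;
    }

    public static void main(String[] args) {
        int n = 1 << 20;
        double[] a = new double[n], b = new double[n];
        for (int i = 0; i < n; i++) { a[i] = i; b[i] = n - i; }

        // Warm-up: give the JIT a chance to compile the hot path first.
        // Skip this and you are timing the interpreter, which is the usual
        // source of "managed code is slow" results.
        double sink = 0;
        for (int i = 0; i < 50; i++) sink += dot(a, b);

        int passes = 100;
        long t0 = System.nanoTime();
        for (int i = 0; i < passes; i++) sink += dot(a, b);
        long t1 = System.nanoTime();

        System.out.printf("%.3f ms per pass (sink=%g)%n",
                (t1 - t0) / (double) passes / 1e6, sink);
    }
}
```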

  11. BTW Prok, why do you care? Don’t you want those geeks and their geeky concerns out of your Second Life anyway?

  12. If they need revenue to quickly grow their network capacity, why doesn’t Linden Lab just require customers to rent or buy virtual land from them? It seems a better way than just locking people out when the concurrent user numbers get too high.

    I know that if I went to log in to a supposedly free game and I was locked out, it would sour me on continuing to play. If instead I was just told that a small fee was now required to play, I would probably fork it over. I guess I just like a simple “free” or “not free” rule instead of something that changes based on an always-changing number.

    P.S. Why do Habbo Hotel and Runescape both have much higher numbers than Second Life when they all have the same free-to-play plan? I’m guessing that ease of use and hardware requirements must be holding SL back. I wonder, if SL were web-based with lower-end graphics, whether it would have concurrent users equal to World of Warcraft?

  13. Ahhh, all those untapped client-side CPU cycles could be useful and reduce load if we could P2P…oh, of course, throwing back-end load management out the window is the price paid. At least from a network engineering perspective. Not sure about the games, though.

    Wouldn’t the objective of P2P be to maintain the highest performance possible (given the variance in end-user systems)? And if that were the case, why would a high polygon count (or graphical load/density) be more desirable?

    Perhaps I’m missing something specific to game design, since I’m just a DB/network geek.

    I think WoW did a decent job with stylized art; in some ways it’s so excellent in its execution because of its cohesion with XML.

  14. For a while now, computing power has not been the issue. Bandwidth is the bottleneck these days. It’s almost trivial to build a cluster of arbitrary size – it will only be limited by bandwidth.

    This holds for single computers as well – that’s why we see all the increases in L1 and L2 cache and other techniques to keep as much data as close as possible to the units that use it. If it were trivial to simply slap 32 processors on a board to increase performance 32 times, such computers would have been out over a decade ago. Why buy a P4 when 32 P2s would offer more power? Obviously, it doesn’t work that way, and a single board doesn’t even have the reliability issues of networked systems to deal with.

    P2P addresses the wrong problem the wrong way, especially in MMOs. If anything, improving the graphics port can raise a graphics card’s bandwidth by 2-10 times, up to gigabytes per second with microsecond access times, whereas networks still move kilobytes with 100-millisecond access times. (Rough numbers in the sketch after this comment.)

    And under no circumstances may any logic of any kind ever be anywhere except on the central server. If it can be altered, it will be. DRM hacks are proof of that. Even with the strictest regulation, movie DRM and various other technologies such as Vista authentication get cracked, hacked, bypassed and abused in a matter of days – and those companies bet hundreds of millions on such systems.

    There was a game that launched last year that claimed to have no problems getting 100k concurrent users. Unfortunately, they forgot to give players a reason to log in, so not only was that number never tested, but the numbers they did achieve would be trivially handled by any other game (hundreds or thousands of concurrent users per entire cluster).
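The bus-versus-network comparison above is easy to sanity-check with napkin math. Every figure here is a 2007-era assumption for illustration, not a measurement:

```java
// BandwidthNapkin.java -- rough numbers behind the bus vs. network point.
public class BandwidthNapkin {
    public static void main(String[] args) {
        double busBytesPerSec = 4e9;    // assumed ~4 GB/s across a 16-lane graphics bus
        double netBytesPerSec = 100e3;  // assumed ~100 KB/s of consumer broadband
        double busLatencySec  = 1e-6;   // assumed microseconds to cross the bus
        double netLatencySec  = 100e-3; // assumed ~100 ms round trip online

        System.out.printf("bandwidth gap: %,.0fx%n", busBytesPerSec / netBytesPerSec);
        System.out.printf("latency gap:   %,.0fx%n", netLatencySec / busLatencySec);

        // Consequence for a peer-hosted world: at 10 state ticks per second,
        // each peer has roughly this many bytes per tick for everyone else.
        System.out.printf("per-tick budget: %.0f bytes%n", netBytesPerSec / 10);
    }
}
```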

  15. This is one of the reasons I find Croquet so interesting. I haven’t done a whole lot with it – it’s still in the “construction set for construction sets” phase of its life, really – but I’m pretty confident that by being open and distributed and having a decent tool set, Croquet will eventually get big without necessarily being big.

    You won’t “visit Croquet,” you’ll “go hang out with your friends” and then “play some of the maze game” and maybe “hack together some interesting toys and take them to a sandbox.” The spaces in which these activities are happening may be more or less game-like, depending on the activity, but they’ll be immersive by being interesting and run atop a common platform.

    It’ll even be a lot more like the mechanics of the Metaverse depicted in Snow Crash than the large centralized systems. Your space runs on your hardware. You use your hardware to access other spaces using familiar metaphors, which it renders to the best of its capability. And in your own space, you can create what you want, and even play host to others. A lot of this already works today (modulo capabilities like NAT traversal). It’s just not as slickly packaged – at this time – as some of the centralized systems.

  16. @Chris:

    Got a URL? Sounds very cool.

    I’m with Antheus; that was a good analysis. I mentioned CPU cycles because that’s the argument most frequently given by P2P fanbois. Unfortunately, even accounting for bandwidth and L1/L2 cache, the hardware limitations prevent maximizing the CPU.

    In other words, performance in any app is only as good as your worst piece of hardware (usually I/O); a fast processor alone does not translate into better performance…

  17. I remember thinking about a peer to peer distributed server engine back in 1990, in a meeting with the Kesmai guys to discuss how to make an Ultima Online out of the Ultima 6 engine. (That was the second attempt. Third time’s the charm I guess!)

    It seemed like an exciting idea to me at the time, a way around limited computing resource problems that were much more of an issue then. But by the mid-90s I concluded that being able to guarantee high connection quality, quick processing of commands, and low latency were more important. And that distributing critical server work to user machines would mean your ability to control and guarantee quality would go out the window, and the quality of a user’s experience would be highly variable and random.

    I also have to agree with all of Antheus’s points. Security and trust issues in a P2P game-server architecture are huge problems, and they are problems I am happy not to be dealing with at all.

    I do like the idea of offloading some things like voice chat into a peer to peer context. Or if users want to use their virtual world for file sharing. Social functions may fit with P2P better than “gameplay” type functions that demand more security, low latency, and guaranteed quality.

  18. There will be a clearer separation between core and non-core, supported and unsupported processes, official and unofficial.

    Core, supported and official will be centralized and owned by the institution. Non-core, unsupported, and unofficial will be decentralized and owned by the community.

    Croquet sounds like an interesting approach to the latter.

    Frank

  19. So there’s these guys in a cave. And they got tired of looking at the shadows being cast by things, you know? ‘Cause the shadows of real things got kinda boring. You know; wolf, alligator, monkey. Yawn… So they started making shadows of stuff that wasn’t real. That was cool. That’s fun. Dragon! Yeah. That’s like a… giant alligator with wings! And… space ships. Excellent.

    Then, you know… The wall itself got boring. So flat. Guano stains. Old, crusty cave-paintings of bison. Bleh. Let’s make a new wall! Wicked…

    But if you want to make shadows of stuff that isn’t real, and you want to make it on a non-wall, you need a much, much better *idea* of what it is you want to do and why. I.e., a brighter light.

    We know why we wanted shadows of real things on the real wall; we had no choice. And shadows of un-real things on the real wall at least leave us with our original contexts. As we move into shadows of un-real things on un-real walls, ask “Why?” earlier, more often, and more rigorously than “What?” and “How?”

  20. […] Raph Koster answers my question about whether SL can work. […]
