Never trust the client

 Posted by (Visited 6988 times) · Game talk
Aug 13, 2007

Gamasutra – Book Excerpt: Exploiting Online Games

  24 Responses to “Never trust the client”

  1. […] Raph, posted 2007-08-14 03:33 | Game talk: Gamasutra – Book Excerpt: Exploiting Online Games (view comments) · Game companies making movies – Raph, posted 2007-08-14 03:30 | Game talk: Yves […]

  2. You can trust the client; you just have to manage them correctly, ideally with a contract that has the level of detail the project requires. Few people have the skill to do this well. But in general, “Never Trust Anyone” – until that trust is earned. Even then, you have to always be wise in dealing with humans, especially when money is involved. Money makes people insane.

  3. Wrong client but still good advice 😉

  4. I’m amazed that this is still an issue online game developers have to be told. As someone who’s been on the other side of that situation (purely out of intellectual curiosity, of course), it’s a bit jarring how often games will trust the client on such critical data as player location, ammo, etc. I suppose since this would never be an issue with MUDs where the telnet client couldn’t be entrusted with anything, it doesn’t occur to some people. It should certainly have its own chapter in any book on online game development though.

  5. From the article…
    “This might seem like a violation of privacy (and we believe it is), but it’s the only way games can hope to detect macro programs at this level.”

    AND

    “.. rumor has it that the game producers resort to taking full screenshots of your computer monitor, including everything visible at the time the shot is taken, and sending those shots home for analysis. Talk about an invasion of privacy!”

    It’s amazing how this author and hackers in general think developers have some sort of moral rulebook to follow once hacking is detected. Is there one I don’t know about? Because I always assumed, once hacking is involved it’s a streetfight and any rights players who hack have are pretty much out the window.

  6. Just a quick clarification: I think it is important that care be taken that anti-hacker measures do not take rights away from players who don’t hack. Once an IP or user has been identified as a problem, that’s when the developer should be allowed to take the gloves off.

  7. It’s not that simple, though, Gene. In some places, doing a process scan of a player’s machine might be illegal. Doing remote surveillance of someone’s monitor certainly could be construed to cross that line.

    Back in the UO days, we discussed doing things like process monitoring and the like, and decided not to based on the legal risks and on qualms about invading privacy that way. Blizzard, as I understand it, does in fact do process monitoring.

  8. Wow… Cheating in a game isn’t a criminal act, but stealing information about the user without consent most certainly is!

    Does Blizzard collect information about what processes users run???
    Makes note to self: never buy anything from that company.

  9. I have a slightly different take on letting the client be authoritative: Trust, But Verify.

    WoW, for example, let the client be authoritative about the player’s position. This design avoided most of the lag-related problems that cause “rubber-banding” of the player’s position in other games. However, if the player’s client said he was standing in mid-air above the mobs of an instance, the server would believe it.

    Once cheaters started noticeably exploiting this, Blizzard put in the Verify half of the equation. They logged all movement data on the server side, and did batch processing on it to determine who was teleporting without the use of hearthstones, flying in mid-air, walking up impassable walls, running faster than possible, and so on. Then they mass-banned those people (including some who had managed–without cheating–to run faster than their algorithm believed was possible, by combining multiple effects..but that’s another story).
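
    The batch check described above can be sketched in a few lines. This is purely illustrative, not Blizzard’s actual code; the speed cap and the sample format are hypothetical.

```python
import math

# Assumed, hypothetical top speed in world units per second.
MAX_SPEED = 7.0

def flag_speed_hacks(samples, max_speed=MAX_SPEED):
    """samples: a server-side log of (timestamp, x, y) tuples for one player.
    Returns the indices of segments traversed faster than the cap."""
    suspicious = []
    for i in range(1, len(samples)):
        t0, x0, y0 = samples[i - 1]
        t1, x1, y1 = samples[i]
        dt = t1 - t0
        if dt <= 0:
            continue  # clock skew or duplicate sample; skip it
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        if speed > max_speed:
            suspicious.append(i)
    return suspicious

# A legal walk, then a 495-unit "teleport" in one second:
log = [(0.0, 0.0, 0.0), (1.0, 5.0, 0.0), (2.0, 500.0, 0.0)]
print(flag_speed_hacks(log))  # → [2]
```

    As the comment notes, a naive cap like this will also flag legitimate players who stack speed effects, so the threshold has to account for every combination the game allows.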

    Anyway, here’s the idea I’ve had for years about how to handle this sort of thing: Allow the client some authority about its actions and outcomes (e.g. it can say “at this time I started moving in this direction” or “at that time I started casting spell X”). The client must also maintain a history of everything it has done recently and the circumstances under which it was done. Whenever it feels like it, the server can request this log data from the client, and then process it in any way it wants to detect suspicious events. It should randomly do this from time to time to everybody, but it should do it more frequently to suspected cheaters, and it should do it promptly after any “suspicious event” involving the player is detected in real time by the server. The analysis could even be done off-line, by other machines.

    To avoid sending lots of data, the client could regularly (e.g. once per second?) commit to the contents of its log cryptographically (e.g. by signing it with a digital signature and sending only the signature to the server). That way, the needed bandwidth is kept to a minimum, and if the server sees fit to request certain log blocks from the client, it can check the digital signatures to know that the client has not tampered with them since they were recorded and committed to.

    In addition, the client could have the ability to report any “suspicious events” involving other players (which might be caused by lag, or by cheating). The server would then request the logs of all players involved and analyse them for systematic patterns.
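
    The commit-then-reveal scheme above can be sketched with a hash chain standing in for the digital signatures the comment proposes. Everything here is a toy illustration; the log format and field names are made up.

```python
import hashlib
import json

GENESIS = "0" * 64  # starting digest for the chain

def commit(prev_hex, entry):
    """Fold one log entry into the running digest. The client sends only
    this digest each second; the raw entries stay local until requested."""
    payload = prev_hex.encode() + json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def verify_log(entries, commitments):
    """Server side: replay the chain over the uploaded entries and check
    each digest against what the client committed to earlier."""
    digest = GENESIS
    for entry, claimed in zip(entries, commitments):
        digest = commit(digest, entry)
        if digest != claimed:
            return False
    return True

# The client records two events, sending only the running digests as it goes:
log = [{"t": 0.0, "act": "move", "dir": "N"},
       {"t": 1.0, "act": "cast", "spell": "X"}]
sent, digest = [], GENESIS
for event in log:
    digest = commit(digest, event)
    sent.append(digest)

# Later, the server requests the raw log and checks it against the digests:
print(verify_log(log, sent))  # → True
forged = [log[0], {"t": 1.0, "act": "cast", "spell": "forged"}]
print(verify_log(forged, sent))  # → False
```

    A plain hash chain only proves the log was not rewritten after the fact; a real deployment would need the signature or MAC the comment mentions so the client cannot simply recompute the chain over a forged history.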

    You also need a system for a GM to “shadow” a player who is suspected of cheating/botting and watch/record all their actions. You need the statistical analyses and pretty graphs and so on, for both computer and human to find behaviour patterns that are too precise to be a human (e.g. bots that do a certain action at regular time intervals).

    In summary.. rather than try to completely prevent cheating (which is impossible anyways), what we should do is try to make it as likely as possible that the cheating will be detected and punished, without erroneously punishing legitimate players.

  10. Blizzard’s process monitoring stuff is called Warden. It receives one of several custom anti-cheat routines from the server, keeps it in memory, and runs it periodically (every 15 seconds or so?). Blizzard changes it from time to time to keep ahead of the hackers, and uses several different versions of the code so that cheat programs will not know which one they face (i.e. even if their cheat program works on the cheat author’s machine and is not detected, there might be a different Warden routine running on some WoW players’ machines and that one might detect their cheat program). Warden does things like checking specific places in memory of each process for specific strings (which happen to be found in certain cheat programs).

    It also takes the title bar contents of all open windows and hashes them, and sends the hashes back to Blizzard. When they find specific cheat programs in the wild that have known strings in their title bars, Blizzard adds the hashes of those strings to their database of ‘bad’ window titles, and anybody who plays WoW while they have an open window with that title, stands a chance of being banned. (With so many users there are bound to be false positives, but I am pretty confident that at least 99% of the users flagged by Warden are in fact using cheat programs and breaking the Terms of Service and fully deserve to be banned).
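
    A rough sketch of the hashed-window-title idea. Warden’s internals are not public, so this is only an illustration of the general technique; the blacklist entry and hash choice are hypothetical, and in the real system the hashes are reportedly sent home and compared server-side rather than on the client.

```python
import hashlib

# Hypothetical server-supplied blacklist: hashes of known cheat-window titles.
BANNED_TITLE_HASHES = {hashlib.sha1(b"SuperCheat 3000").hexdigest()}

def flag_titles(window_titles):
    """Hash every open window title and return only blacklist matches,
    so ordinary titles never leave the machine in readable form."""
    return [title for title in window_titles
            if hashlib.sha1(title.encode()).hexdigest() in BANNED_TITLE_HASHES]

print(flag_titles(["Notepad", "SuperCheat 3000"]))  # → ['SuperCheat 3000']
```

    Hashing at least means the vendor only learns about titles it already suspects, though, as the comment notes, collisions and common title strings still leave room for false positives.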

    The hackers fight back by, for example, building an on-line database of all known Warden routines, and having their cheat program capture the Warden routine and send it to the database for confirmation, and *disable itself* unless the database already knows that Warden routine and knows that it can’t detect the cheat program. The hackers can also tell when Blizzard makes changes to Warden, because after weeks of the database being sent the same routines over and over, suddenly it will start receiving a bunch of different routines that have not been seen before. The hackers will then disassemble these routines and figure out what they do, and share this info among cheat program authors, who will decide if the new routines threaten to detect their program or not, and possibly change the cheat program to avoid being detected (e.g. by “neutering” the Warden routines in memory, or by many other methods). It’s basically an arms race between the WoW developers and the cheat community.

    Anyway, with Warden, Blizzard has to be very careful of the privacy implications and not to fall afoul of various countries’ computer-crime laws. But in general, I think Warden is exactly the right sort of thing. You can’t prevent cheating, but you can make it so *difficult* to cheat without getting caught, that very few users will be able to pull it off. Warden won’t detect a single hacker writing a program for his own use and cheating with it (as long as he is subtle); however, if Blizzard sees a pattern of a lot of people using a certain cheat program, they make sure to detect it in a future version of Warden and most of those people will then be caught and banned.

  11. Not to start a big discussion on cheating, but it’s certainly not a stretch to consider some types of cheating as equivalent to theft/fraud. In a single-player game, no problem, but it changes in the multiplayer realm.

    Think of a person in a store swapping sale signs in an attempt to gain advantage in their purchase. Not only are they defrauding the merchant, they are defrauding other customers as well.

    To me that seems reasonably analogous to the cheater in an MMO who is manipulating things (swapping signs) in order to gain advantage. I don’t think it’s so absurd to call fraud in such a case.

    The cheater is changing the rules in favor of him/herself and against all other players. If it’s a game that those cheated players are paying for, then they have suffered financially (if only nominally).

  12. Anyway, here’s the idea I’ve had for years about how to handle this sort of thing: Allow the client some authority about its actions and outcomes (e.g. it can say “at this time I started moving in this direction” or “at that time I started casting spell X”). The client must also maintain a history of everything it has done recently and the circumstances under which it was done.

    The issue with this is that you’re simply moving the trust from one part of the client to another. Instead of believing the client about its position, you’re trusting its report on the changes made to that position value. The “Verify” part of “Trust, but Verify” must be done on the server.

    Certainly, the use of cryptographic signing in the log would make things more difficult for hackers, but not by much since the signing algorithm and the signing key would be available to the client at some point. As you mention in your second post, the best you can hope for is to make it difficult enough that most people give up trying.

    Of course, that’s not an easy thing to do, as evidenced by your WoW example. For all the work Blizzard has put into their anti-cheat system, people are still out there cheating and adapting their cheats. Blizzard has also probably kept that community around longer, and made it more active simply by giving them a moving target. If Blizzard had left it easy to cheat, and easy to detect, the people who create the cheats would probably move on because the challenge was no longer there. Now they’ve made it a game, probably one more enjoyable than WoW to the people playing it.

    Also, with regards to the screen capture system, I believe certain versions of the PunkBuster anti-cheat system can take screenshots of the game at the moment cheating is suspected.

  13. The cheater is changing the rules in favor of him/herself and against all other players. If it’s a game that those cheated players are paying for, then they have suffered financially (if only nominally).

    This would probably be very difficult to prove legally, unless there was some direct monetary gain from the cheating, for the person/organization that was cheating (such as in gold farming operations). In many games that are not directly competitive, it would be difficult to prove that the cheating was detrimental to other players’ enjoyment of the game.

    Of course the argument could always be made that it is the company running the service who is responsible for delivering the value of the product, not the other players. Most cases for griefing are legally based on harassment laws, rather than defrauding the victim out of the service that they paid for.

  14. “Wow… Cheating in a game isn’t a criminal act, but stealing information about the user without consent most certainly is!” Assuming you are not a hacker, I would agree with you – a developer policy that treats everyone like a hacker is a problem. But once you are a hacker, I certainly feel no obligation to cry in moral outrage with you. I wouldn’t shed a tear if the developer were to use whatever evil technical means they have at hand to make your life a little more difficult. Of course, there’s a difference between what a developer would like to do and what a prudent, practical response looks like. Otherwise there might be a few more erased hard drives in the world.

  15. Does Blizzard collect information about what processes users run???

    Wow… Cheating in a game isn’t a criminal act, but stealing information about the user without consent most certainly is!

    Imagine a pickpocket approaches you in the mall, and without threat or malice says “I am going to take your money out of your pocket, and you will never see me again.” You say “No problem, go ahead,” and watch without complaint as he relieves you of your money and disappears into the crowd. Were you just robbed?

    Is Blizzard stealing your information if you accept the EULA?

  16. Todd Ogrin, shrink-wrapped EULAs aren’t worth anything in some countries, so there is no implied consent. No: extraordinary conditions should be made clear to the customer before the purchase takes place. And… not all extraordinary conditions are legal…

    Gene, in the case of hacking, which is breaking and entering, you should call the police. They might have the right to do surveillance of external systems, but only if there is evidence of a real crime. If a game developer collected information from my computer without my consent, I most certainly would report it as hacking my computer. Two wrongs don’t make a right.

    The same goes for fraud, etc.: call the police; developers do not have the authority of the police. Sorry. And only games which are 100% skill-based could come up with fraud as an argument in jurisdictions where gambling is forbidden. (And let’s face it, nobody can create a secure skill-based online game without DRM-like hardware control and severe restrictions on the game design.)

    No sane jury in a country with sane laws will accept the argument “some players gained some unfair advantage in my pastime-entertainment-thing, so I had to spy on my users in ways which even the police aren’t allowed to do”.

  17. No. Extraordinary conditions should be made clear to the customer before the purchase takes place.

    I definitely agree that game companies could be more transparent about their snooping practices. However, people barely read EULAs as it is. Who’s going to read one while standing in the aisle at Best Buy?

    And… not all extraordinary conditions are legal…

    Also agreed. I suppose if my pickpocket instead said “I am going to kill you,” things would be quite a bit different.

  18. The same goes for fraud etc, call the police, developers do not have the authority of the police.

    That’s a good point – If you’re a shoplifter, I (the merchant) cannot walk into your home and look through your closets to see if you have any of my stolen merchandise in your possession.

    So I guess the only choice developers have is to build tools to detect and/or prevent cheating (or other transgressions). But this, of course, is made largely impossible by the fact that cheats are carried out client-side.

    Given sufficient resources, I suppose you *could* develop systems that monitor and analyze the states of characters server-side and flag suspicious behavior … maybe that’s already being done. Looks like that’s basically what moo is talking about in the “Trust but verify” comment.

    I don’t know – I personally think cheating in these communities is a bigger deal than most other people seem to. The most vocal opinions (not here) about it always seem to lean towards “who gives a —-?” and maybe that’s the only way to really get through life. (Exhibit A: Enron, Tyco, and Worldcom.)

    Chances are that the cheating doesn’t stop with just games. After all, liars don’t tend to tell lies in just one setting.

  19. Chabuchi, I guess cheating matters a lot in PvP, but in PvE I don’t really see the big deal. If players bypass content by cheating in PvE, it is pretty much a statement about how much that content sucks in terms of fun.

    Designers can do server-side computation where it matters, veracity checking where it matters less, and let the client detect tampering and report to the server (in a disguised manner) that it might have been tampered with. Tamper checking has to be littered all over the client code; you can’t put it in one central place. That just aids crackers and makes their game more fun. On the other hand, maybe developers don’t really care about cheating, but just want to give the impression that they do..?
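
    One way to picture the “disguised tamper report” idea is the hypothetical sketch below. A real client would do this in native code, scattered across many call sites; every name and value here is invented for illustration.

```python
import zlib

# Hypothetical critical client state that a cheat would have to modify.
CRITICAL_STATE = b"player_speed=5.0;gravity=9.8"

def integrity_probe(salt):
    """One of many probes scattered through the client: checksum the
    critical state and mix in a per-call salt, so the value that goes
    over the wire looks like ordinary telemetry noise."""
    return zlib.crc32(CRITICAL_STATE) ^ salt

# The server knows the expected checksum and the salt it issued, so it
# can unmask the report and spot tampering without an obvious flag:
expected = zlib.crc32(b"player_speed=5.0;gravity=9.8")
print(integrity_probe(0x5EED) ^ 0x5EED == expected)  # → True
```

    The salt changes per request, so a cracker cannot simply replay an old “clean” report; and because the probe is duplicated at many call sites, patching one copy out still leaves the others reporting.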

    In general I think this problem would be less visible if designers knew more about technology and protocol design, and let their game design be informed by that. Blizzard had the opportunity to learn from Diablo given the amount of whining about cheating those games were associated with. It’s not like it is impossible to design tamper-resistant game mechanics.

    (disclaimer: I don’t play Blizzard games myself, so this is based on other people’s experience 😉

  20. But once a hacker, I certainly feel no obligation to cry in moral outrage with you. I wouldn’t shed a tear if the developer was to use whatever evil technical means they have at hand to make you life a little more difficult.

    There are non-evil technical means. They are called referential integrity and business logic. They don’t just make a hacker’s life a little more difficult; they take them out of the equation altogether.

    By the same logic, banks wouldn’t need security. Why bother? Just say that withdrawing from accounts you don’t own is illegal, and when people start doing it, send the army against them. Or nuke them. Or something.

    And why would anyone bother shredding top-secret documents marked as such? Just make a law that makes it illegal to search through trash produced by governments.

    The responsibility here lies solely with the provider of the service. Or should malls also institute mandatory strip searches and X-ray scans to make sure nobody stole anything? Together with a mandatory search of medical history (don’t want kleptomaniacs in your store). And a criminal record check (you stole before? We don’t want you). And a finance check (can you even afford to buy things here?). And a genealogy check (your brother’s sister’s cousin, twice removed, stole something 17 years ago – how do we know you’re different?). And all of that on entry and on exit from the mall. And each time you enter and exit each store. Hey, if you have nothing to hide, you have nothing to fear.

    People will try to steal, cheat and lie. Make it impossible for them to do so, rather than beating them with a stick when they do.

    The only game that could try implementing this would be CanOfWorms Online.

  21. The issue with this is that you’re simply moving the trust from one part of the client to another. Instead of believing the client about its position, you’re trusting its report on the changes made to that position value. The “Verify” part of “Trust, but Verify” must be done on the server.

    Well, obviously. =)

    But the point is that full verification is costly, so maybe we don’t do it all the time. Maybe we do it for randomly-selected intervals, and whenever another client reports seeing something that (from its point of view) seemed suspicious. Using a cryptographic hash or similar means, you can force the client to commit to the contents of its “fully detailed log of events” within a short time after the event occurred—meaning that the client only has that short amount of time to forge log entries.

    Then if the server decides to review recent logs, the client is forced to send it log contents with the matching hash, which the server can then append its own recent log of events to, and submit the pair of logs to a queue for off-line processing by a different machine. Then if analysis of the log proves that the client caused “impossible” events to occur such as floating in mid-air, running or jumping faster/farther than should be possible, etc. then you have the evidence needed to ban them.

    You might only want to rely on a system like this for things like player positions; you could still make the server completely authoritative about things like, whether a weapon or spell hits or misses the player.

    Again, totally preventing people from cheating is probably impossible. But we can try to make it so that, regardless of how skilled the hackers are, it is very likely that they will get caught at any sort of active tampering. (Passive cheats like radars are also a problem, and for that I don’t have ideas except for Warden-type stuff).

  22. Passive cheats can be run from a separate computer so “warden” won’t cut it.

  23. But the point is that full verification is costly, so maybe we don’t do it all the time. Maybe we do it for randomly-selected intervals, and whenever another client reports seeing something that (from its point of view) seemed suspicious.

    Randomly farm out verification tests to other clients playing the game? Maybe have all the nearby clients constantly verify each other’s movement based on their inputs (since they need to know about each other anyway) and report their movement checksums back to the server. It might work, assuming everything is deterministic.
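
    A toy sketch of the deterministic-replay idea, with hypothetical speed and tick constants. In practice the hard part is exactly the assumption flagged above: floating-point results must be bit-identical on every verifying machine, which real lockstep engines solve with fixed-point math.

```python
import zlib

SPEED = 5.0   # assumed movement speed (units/second)
TICK = 0.1    # assumed simulation tick (seconds)

def simulate(start, inputs):
    """Deterministically replay a neighbour's movement inputs. Positions are
    rounded so every verifying client computes an identical path."""
    x, y = start
    path = []
    for dx, dy in inputs:
        x += dx * SPEED * TICK
        y += dy * SPEED * TICK
        path.append((round(x, 3), round(y, 3)))
    return path

def movement_checksum(path):
    """Compact checksum each client reports back to the server."""
    return zlib.crc32(repr(path).encode())

# Two independent replays of the same inputs agree on the checksum:
path = simulate((0.0, 0.0), [(1, 0), (1, 0), (0, 1)])
print(path[-1])  # → (1.0, 0.5)
print(movement_checksum(path) ==
      movement_checksum(simulate((0.0, 0.0), [(1, 0), (1, 0), (0, 1)])))  # → True
```

    The server never sees the paths, only the tiny checksums; a mismatch among nearby clients is itself the suspicious event that triggers a full log request.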

  24. All good points. Having run Sherwood out of my basement for a few years now, I’ve been through more than my fair share of hacker attacks and numerous complete rewrites of the backend. I wish cheating was the only motivation, because some of the attacks have been clearly intended to do real damage. Because this is a hobby turned business recently, it can be hard to take a deep breath, calm down and respond in a prudent (as opposed to vengeful) manner. Maybe I take it more personally than most developers because it’s a basement operation but I wonder what motivates someone to attempt to take down a free shockwave game. I’ve never “nuked” a hacker or attempted to acquire personal information – that’s more about just being a good human being as opposed to “legal issues”. But these incidents can leave you feeling pretty angry and I wouldn’t blame any developer who crossed the line.

Sorry, the comment form is closed at this time.