Ruminations on Security in a Distributed Game
DWM: Work in progress; some of the background assumptions here may not hold up.
How do you create an "open" game with many servers, possibly differing in revision or implementation, that can still interoperate reliably while addressing game hacking?
You start by defining what client programs are allowed to do pretty broadly. Things like auto-aiming are not considered cheats; in fact, you provide a scripting language to encourage the development of such things. They simply represent superior automation, which should be a rich part of the game, dominated by technical players perhaps, but we would hope for an active market in such scripts. (Hopefully in game money, not the real stuff! Hmm.) This minimizes the importance of one common category of hacking.
The real security issues revolve around controlling capabilities owned by player-controlled objects (e.g. what engines a ship has mounted), and forcing client and server code to adhere to modelling constraints (e.g. the physical model doesn't allow instantaneous travel across a parsec of space). The latter problem is probably the easier one to solve.
Assume that we're using predictive modelling in the clients with transfer of state information over the net, which seems to be the preferred technique these days for dealing with unreliable networks. It should be possible for a group of interacting computers to detect if one of them is regularly violating the model's laws.
The key idea here is that an observer of a node's network data stream, given an understanding of the model's laws, should be able to detect whether its state transitions violate fundamental aspects of the model.
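A minimal sketch of what such an observer check might look like, assuming the model imposes a hard speed limit (MAX_SPEED, StateReport, and is_feasible_transition are illustrative names, not anything specified here): each pair of consecutive state reports implies a velocity, and a velocity the model cannot produce is evidence of a violation.

```python
from dataclasses import dataclass

# Hypothetical model constant; the real limit would come from the shared physics model.
MAX_SPEED = 300.0  # game units per second

@dataclass
class StateReport:
    """One position update observed in a node's outbound data stream."""
    t: float  # timestamp, seconds
    x: float
    y: float
    z: float

def is_feasible_transition(prev: StateReport, curr: StateReport) -> bool:
    """Return True if the velocity implied by two consecutive reports obeys the speed limit."""
    dt = curr.t - prev.t
    if dt <= 0:
        return False  # out-of-order or duplicated timestamps are themselves suspect
    distance = ((curr.x - prev.x) ** 2 + (curr.y - prev.y) ** 2 + (curr.z - prev.z) ** 2) ** 0.5
    return distance / dt <= MAX_SPEED
```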
Notice the similarity between this and the [Dead Reckoning technique]. Perhaps Dead Reckoning can be extended to compute not only DR bounds, but also Reckoned Feasibility bounds which, if exceeded, can be taken as an indication of hackery (or a bug -- nah).
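A rough sketch of the Reckoned Feasibility idea, assuming a constant-velocity Dead Reckoning model and a hard acceleration limit (MAX_ACCEL, dead_reckon, and within_feasibility_bound are all illustrative names): under that limit no legal object can drift farther from its dead-reckoned track than 0.5 * MAX_ACCEL * dt^2 in dt seconds, so a reported position outside that envelope exceeds the feasibility bound.

```python
# Hypothetical model constant; the real limit would come from the shared physics model.
MAX_ACCEL = 50.0  # game units per second squared

def dead_reckon(pos, vel, dt):
    """Constant-velocity dead reckoning: extrapolate an (x, y, z) position forward by dt seconds."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def within_feasibility_bound(prev_pos, prev_vel, reported_pos, dt, slack=1.0):
    """Reckoned Feasibility check: the deviation from the dead-reckoned track can never
    exceed 0.5 * MAX_ACCEL * dt**2 (plus a little slack for timing jitter); a report
    outside that envelope is either a bug or hackery."""
    if dt <= 0:
        return False
    expected = dead_reckon(prev_pos, prev_vel, dt)
    deviation = sum((r - e) ** 2 for r, e in zip(reported_pos, expected)) ** 0.5
    return deviation <= 0.5 * MAX_ACCEL * dt * dt + slack
```

Note that a client that teleports a ship fails both checks, while a client that merely aims better than a human never will, which matches the permissive stance on automation above.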
True? Maybe not...
See also /Open Client