Distributed Object State Updates
Assumptions:
- We wish to use Dead Reckoning techniques to provide a smooth simulation that trades off higher CPU overhead for lower network bandwidth.
- The abstractions used should make it relatively easy for game developers to come up with new distributed objects.
- The API should focus initially on physical simulation, but be extensible to other types of objects. Some of the discussion here will be explicitly in terms of physical simulation, in order to keep things concrete. (Further generalization may be done later based on experience.)
(Actually, this system might be overkill for non-physical objects, since these game components likely will not have the same need for real-time updating. However, it might end up being very convenient for some types of non-physical objects. But in any case, it should be initially designed to serve the needs of the physical simulation.)
Definitions:
- In Dead Reckoning, each node simulates all objects of interest to it. All nodes must have the same simulation code driving each object. We call such an object distributed. In a sense, all instances of the object on all nodes that are simulating it form a single distributed object.
- An active object is one whose state can be directly manipulated by a player. Only certain aspects of an object's state can be directly changed, however. This varies with the type of object.
- An object is owned by a single node. The owning node can change during the lifetime of an object. The owner can override the simulation code of a distributed object by applying state changes to it outside of those predicted by the simulation code. (See the section below on influencing distributed objects, and the interface sketch following these definitions.)
- Certain nodes have veto power over state updates. In essence, these nodes own the simulation, and can thus override the rights of object owners. This allows server nodes to monitor simulations and override apparent attempts at hackery.
(Section on influencing distributed objects moved to a Think Tank page, Distributed Physical Modelling.)
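To make the abstraction concrete, here is a rough interface sketch in C++ tying the definitions above together. Everything below (DistributedObject, StateUpdate, NodeId, and so on) is an illustrative assumption, not an existing API; concrete object types such as rigid bodies would derive from it and supply the actual simulation and serialization.

    typedef unsigned int NodeId;
    typedef unsigned int ObjectId;

    // The data an owner publishes when its simulated state diverges too far
    // from what observers are extrapolating.
    struct StateUpdate {
        ObjectId object;     // which distributed object this update applies to
        double   timestamp;  // simulation time the state refers to
        // ... serialized state (e.g. position, velocity) would follow ...
    };

    // One node's instance of a distributed object. All instances, on all
    // nodes, run the same simulate() code.
    class DistributedObject {
    public:
        virtual ~DistributedObject() {}

        ObjectId id() const            { return id_; }
        NodeId   owner() const         { return owner_; }
        void     setOwner(NodeId node) { owner_ = node; }  // ownership can migrate

        // Advance the shared simulation code by dt seconds.
        virtual void simulate(double dt) = 0;

        // Apply an authoritative update from the owner (or a vetoing server node).
        virtual void applyUpdate(const StateUpdate& update) = 0;

        // Produce the update the owner would send to observers.
        virtual StateUpdate makeUpdate(double now) const = 0;

        // Error metric between this instance and another instance of the same
        // object (for physical objects, typically positional error).
        virtual double divergenceFrom(const DistributedObject& other) const = 0;

    protected:
        ObjectId id_;
        NodeId   owner_;
    };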
Limiting Bandwidth Usage
The basic description of Dead Reckoning requires a node to, essentially, simulate an owned object twice: once respecting all player input, and once ignoring any player input received since the last state update it sent to observers. When the two simulated states diverge by more than a certain amount, a new state update is sent to observers.
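To sketch how that owner-side check might work, building on the interface sketch above (the OwnedObjectEntry class, its "ghost" copy, and its member names are assumptions, not a committed design):

    // Owner-side sketch of the divergence check. The "ghost" is a second copy
    // of the object, simulated from the last published update without any
    // subsequent player input, i.e. what observers are extrapolating.
    class OwnedObjectEntry {
    public:
        OwnedObjectEntry(DistributedObject* real, DistributedObject* ghost,
                         double errorThreshold)
            : real_(real), ghost_(ghost), errorThreshold_(errorThreshold) {}

        // Advances both simulations; returns true and fills 'update' when the
        // divergence exceeds the threshold and observers need a correction.
        bool tick(double dt, double now, StateUpdate& update)
        {
            real_->simulate(dt);    // true simulation, driven by player input
            ghost_->simulate(dt);   // what observers are currently extrapolating

            if (real_->divergenceFrom(*ghost_) <= errorThreshold_)
                return false;       // close enough; no update needed

            update = real_->makeUpdate(now);
            ghost_->applyUpdate(update);   // resync ghost to the published state
            return true;
        }

    private:
        DistributedObject* real_;
        DistributedObject* ghost_;
        double errorThreshold_;
    };

Any update produced by tick() would then be handed to the sending path discussed next.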
In its basic form, this arrangement causes an owning node to send updates to all observers. In cases where bandwidth is precious, as is often the case on a player's node, sending multiple copies of an update is undesirable. Also, due to firewalls (and other network topology anomalies?), it can often be difficult for two arbitrary nodes to establish direct communications with each other. Instead, the updates could be sent to an upstream server node, which can then distribute them to interested observers. Application of this technique could potentially be conditional on bandwidth availability. (For initial implementation, I recommend use of this technique unconditionally.)
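A minimal sketch of that routing decision, using hypothetical Address and Connection placeholders for the transport layer (all names here are assumptions):

    #include <cstddef>
    #include <vector>

    // Hypothetical transport placeholders for this sketch.
    struct Address { /* host, port, etc. */ };

    class Connection {
    public:
        void sendTo(const Address& to, const StateUpdate& update);
    };

    // Either send an update directly to every observer, or send one copy to
    // an upstream server that redistributes it to interested observers.
    class UpdateSender {
    public:
        void send(const StateUpdate& update)
        {
            if (relayThroughServer_) {
                // One copy upstream; the server fans it out to observers.
                // (Recommended unconditionally for the initial implementation.)
                connection_->sendTo(serverAddress_, update);
            } else {
                // Direct delivery; assumes observers are reachable, which
                // firewalls may prevent.
                for (std::size_t i = 0; i < observerAddresses_.size(); ++i)
                    connection_->sendTo(observerAddresses_[i], update);
            }
        }

    private:
        bool relayThroughServer_;
        Connection* connection_;
        Address serverAddress_;
        std::vector<Address> observerAddresses_;
    };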
It is the nature of distributed object state updates that later ones completely supersede earlier ones. In order to alleviate transient network congestion, our distributed object services will be able to recognize that a state update is:
- associated with a particular, identifiable object
- associated with a particular time

When a state update is queued for transmission, it will replace any earlier state updates for the same object. This allows state updates to be dropped before they are transmitted, if network congestion is causing delays.
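A sketch of that coalescing behaviour, reusing the ObjectId and StateUpdate types from the earlier interface sketch (the UpdateQueue class and its names are assumptions): keying the outgoing queue by object identifier means a newer update simply overwrites an older pending one.

    #include <map>

    // Outgoing-queue sketch: a newly queued update supersedes any earlier
    // update still pending for the same object, so stale updates are dropped
    // before transmission when the network is congested.
    class UpdateQueue {
    public:
        void enqueue(const StateUpdate& update)
        {
            std::map<ObjectId, StateUpdate>::iterator it = pending_.find(update.object);
            // Keep only the most recent update (by simulation time) per object.
            if (it == pending_.end() || it->second.timestamp <= update.timestamp)
                pending_[update.object] = update;
        }

        // Called when the transport is ready to send more data. The order in
        // which different objects are drained is not significant here.
        bool dequeue(StateUpdate& out)
        {
            if (pending_.empty())
                return false;
            std::map<ObjectId, StateUpdate>::iterator it = pending_.begin();
            out = it->second;
            pending_.erase(it);
            return true;
        }

    private:
        std::map<ObjectId, StateUpdate> pending_;
    };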
A single error threshold might not be appropriate for all observers. Observers that are far away from an object don't need to simulate that object with the same level of precision that nearby observers do. Taking advantage of this fact could reduce network traffic considerably.
Thinking out loud. Effectively, make the server the only observer of a player node's owned objects, with an error threshold that is the minimum of all the real observers' thresholds. The server, however, has multiple observers (the real observers) of its simulation of the object. So as its simulation gets updated by the owning node, it will send out updates as needed to individual clients. This places the processing burden of evaluating multiple error thresholds on the server.
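A sketch of what that server-side fan-out might look like, continuing the illustrative types above and the Address/Connection placeholders from the routing sketch. Observer, thresholdFor, and all member names are assumptions about one possible implementation.

    #include <cstddef>
    #include <map>
    #include <vector>

    // The owning node treats the server as its only observer (using the
    // minimum of the real observers' thresholds); the server re-evaluates
    // each real observer's own, possibly distance-based, threshold before
    // forwarding an update.
    struct Observer {
        Address address;
        DistributedObject* lastSentState;  // what this observer is extrapolating;
                                           // the server would also step these
                                           // copies forward each frame (not shown)

        // Larger thresholds for observers far from the object.
        double thresholdFor(const DistributedObject& object) const;
    };

    class UpdateRelay {
    public:
        void onOwnerUpdate(const StateUpdate& update)
        {
            std::map<ObjectId, DistributedObject*>::iterator found =
                objects_.find(update.object);
            if (found == objects_.end())
                return;                           // unknown object; ignore
            DistributedObject* object = found->second;
            object->applyUpdate(update);          // server's own simulation copy

            std::vector<Observer>& observers = observersOf_[update.object];
            for (std::size_t i = 0; i < observers.size(); ++i) {
                Observer& obs = observers[i];
                double error = object->divergenceFrom(*obs.lastSentState);
                if (error > obs.thresholdFor(*object)) {
                    connection_->sendTo(obs.address, update);
                    obs.lastSentState->applyUpdate(update);  // new baseline
                }
            }
        }

    private:
        std::map<ObjectId, DistributedObject*>     objects_;
        std::map<ObjectId, std::vector<Observer> > observersOf_;
        Connection* connection_;
    };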