The following goes a bit beyond describing a 'game loop', but tries to pull together a lot of ideas concretely enough to guide some coding. I originally posted this on my scratchpad page, where Frank added some comments at my request. I've tried to clean it up just a little and organize it so that it will be easier to carry on an exchange of ideas. DWM
This focuses on control and data flow in the player program, specifically the real-time flight simulation portion. See also the older and more narrative description in Software Architecture Ideas/General Distributed Architecture for some context.
Let's assume you've just launched your ship from a station. The distributed game infrastructure has already granted you authority to control your ship.
Ship Control Object
Create an object to represent the ship. The interface allows you to interrogate information about the ship and control its systems.
- To allow for highly configurable ships, represent controllable subsystems as subobjects with their own interfaces.
- Use a COM-like interface pointer acquisition system, but implement it in pure C++, since server-based AI programs will be using these APIs too (a rough sketch follows this list).
- Each subsystem falls into one of several predefined, hard-coded categories. Some subsystems might allow for multiple instances, though. Example: weapons systems.
- The client app will need a way to identify each subsystem, esp. in the multiple-instance case, and associate it with UI elements and user input streams. This could get a bit complex -- this is where the client app programmer earns his pay. :) nfs Of course, this is also what will make the game rich and interesting.
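As a concrete illustration, here's roughly what the pure-C++, COM-like acquisition could look like. This is only a sketch under the assumptions above; the names (SubsystemId, ISubsystem, IShipControl, and the category list) are placeholders, not an agreed-on API.

```cpp
// Hypothetical sketch only -- SubsystemId, ISubsystem, and IShipControl are
// placeholder names, not an agreed-on API.
#include <cstddef>

// Hard-coded subsystem categories, as described above.
enum class SubsystemId { Propulsion, Attitude, Weapon, CargoPod };

// Base interface that every controllable subsystem exposes.
struct ISubsystem {
    virtual ~ISubsystem() = default;
    virtual SubsystemId Category() const = 0;
    virtual int Instance() const = 0;   // distinguishes multiple weapons, etc.
};

// Ship control object: COM-like interface acquisition, but plain C++ so that
// server-based AI programs can link against the same headers.
struct IShipControl {
    virtual ~IShipControl() = default;
    // Returns nullptr if the ship has no such subsystem/instance.
    virtual ISubsystem* GetSubsystem(SubsystemId id, int instance = 0) = 0;
    virtual std::size_t SubsystemCount(SubsystemId id) const = 0;
};
```

A controller (player UI, AI, autopilot, or script engine) would ask the ship for, say, weapon instance 2 and then talk to it through its category-specific interface.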
nfs (re: ship subsystems as subobjects) So, all objects share properties like mass, volume, unit cost (assuming a player can buy/sell them), manufacture (compatibility switch?). Subobjects (like, for instance, an engine) would also have thrust, etc.
DWM Actually, since I'm describing the ship control interface, I was thinking of subsystem control interfaces. Some of the things you describe here belong in the control interface, and some don't, I think:
- Engine thrust capability and orientation relative to the ship model. Autopilots (of which there will likely be several types) will need this. Implicit in my description was that these control interfaces will serve any controlling entity: player UI, AI, autopilot, or script engine, as examples.
- Mass: Maybe for a cargo pod that's attached to the ship, since it would be useful for a controller to know what it could shed by ejecting. But I'd assume that a propulsion system could not be detached in flight, so there doesn't seem much point in including it.
- Unit cost: Definitely not. I think that ship configuration and trade systems will be very separate from the flight sim. Of course, something has to record the correct subsystem in a ship's configuration after a subsystem is purchased and added to a ship, so there is a connection, but it's a loose one IMO.
- Manufacture/compatibility: Again, once a subsystem is part of a ship, this is no longer an issue. Although again, objects like cargo pods might be an exception -- if you find one floating in space, you might need to know if it has the right kind of docking hardware for you to pick it up.
Rendering System Requirements
- fast access to large amounts of data on objects in immediate vicinity.
- position, orientation
- association with visual rendering data
- additional relevant state information that might modify visual renderings (friend/foe/unknown, emissions (shields/headlights), etc.) (tricky!)
- psim data should probably be directly accessed -- copying would be a performance hit.
- psim object state must be frozen while rendering cycle executes
- Any advantage to running psim on a separate thread? The current scatter/gather mechanism would allow some execution to occur in parallel; psim blocks before updating object states (scatter) to wait for the renderer to finish. However, this might clash with collision detection algorithms that need to step an object forward and back through time to determine the exact collision time/location. Might be able to deal with this by copying the states of involved objects.
nfs Still concerned as to how we define "immediate vicinity."
DWM Yes, that will be difficult. We need to design a hierarchy of positional representations. Or we need to partition space artificially, which, as you know, I'm very loath to do.
nfs I'm a little concerned about running the psim on a separate thread. I think that many of the advantages of running in parallel will be outweighed by the blocking necessary for the data gathering loop. In fact, I think that the local psim could be controlled (or paced) by the gui. That is, the gui would control its update loop; that way the gui could gather the data for its current rendering cycle without worrying about the changing state of the psim. (who knows, maybe I'm just biased since I'm a gui coder at heart. ;) )
DWM I went back and forth on this in my own mind. However, in the end I decided it's worth trying on separate threads. Psim will be receiving asynch input from the network in any case, so having control input from the UI feed into the same queues doesn't add much complexity and probably negligible overhead. Synching on the entire system state (rather than on individual objects) will be easy and will also incur negligible overhead.
There are several benefits to using separate threads, I think:
- Psim has its own loop and has more control over the simulation steps it takes. I'm a bit concerned about managing the accuracy/CPU load factors in psim, and I think this will be easier to handle this way. As a for instance, on a really fast processor, psim might decide that it doesn't need to use its whole CPU budget to achieve the target accuracy. It might in fact be detrimental if it achieved much greater accuracy than its sibling sims on the net. (Not sure about that yet.)
- It establishes a model for adding other processing elements, such as AI modules. Each module can be written as a standalone thread of control, interfacing with other modules only where it needs to. Each can be given a budget of CPU time it's allowed to use, and can use its own strategies (which will vary wildly depending on the application) to try to meet that budget.
Interestingly, in this approach, the rendering loop will have a time budget too! So it would self-limit its frame rate as needed. I have to check to make sure that the necessary metering tools are available to do this, though. I'll look into that ASAP.
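To make the threading idea concrete, here is a minimal sketch of a psim thread that consumes its input queue, takes a step, publishes states under a lock so the renderer can freeze them, and then sleeps out the remainder of its budget. Everything here (names, the lock-based freeze, the fixed 10 ms budget) is an assumption for illustration, not settled design.

```cpp
// Hypothetical sketch only: psim on its own thread, with the renderer
// freezing state via a mutex while it reads.
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

class Psim {
public:
    void Run(std::atomic<bool>& quit) {
        using clock = std::chrono::steady_clock;
        while (!quit) {
            auto start = clock::now();
            ConsumeInputQueue();              // gather: controller + network updates
            double dt = ChooseStepSize();     // accuracy vs. CPU budget trade-off
            Integrate(dt);                    // compute new object states
            {
                // Scatter: the renderer cannot read while states are updated.
                std::lock_guard<std::mutex> lock(stateMutex_);
                PublishStates();
            }
            // Stay inside the CPU budget by sleeping out the remainder.
            std::this_thread::sleep_until(start + budget_);
        }
    }

    // Called by the rendering thread to freeze psim state while it reads.
    std::unique_lock<std::mutex> LockState() {
        return std::unique_lock<std::mutex>(stateMutex_);
    }

private:
    void ConsumeInputQueue() { /* apply queued updates to object states */ }
    double ChooseStepSize()  { return 0.01; /* placeholder: fixed 10 ms step */ }
    void Integrate(double /*dt*/) { /* advance the simulation by dt */ }
    void PublishStates()     { /* scatter new states for readers */ }

    std::mutex stateMutex_;
    std::chrono::milliseconds budget_{10};  // illustrative per-step budget
};
```

The renderer would call LockState() at the start of its cycle and hold the lock only while gathering the data it needs, which keeps the synchronization at the whole-system level rather than per object.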
User and network input processing
- user input and network input result in state change requests to psim
- queue these, consume at the beginning of each new psim cycle
- time synch issues?
- For user input, probably not. Consume as soon as possible. (Later inputs may override earlier ones.)
- Networked input will be timestamped, related to dead reckoning. Psim must extrapolate desired state (taking into account timestamp on network msg vs. psim's current time) and compare to current simulated state. Either 'warp correction' or gradual convergence.
- warp correction - easy to code processing of network input, harder to handle collision consequences. Ignore collisions? Would require some state to continue ignoring collisions for some time period. (Consider warping inside another vessel, then gradually separating!)
- gradual convergence -- harder to process network input, but no (?) inherent collision processing problems. Since convergence could take some time, additional state is needed?
- Note that these scenarios apply only to ships not controlled by user!? (What if server notices user cheating? Seize ship control! Further state updates from user will be ignored. All that comes later, of course.)
nfs (re: parenthetical comment on cheating) of course, you'd still need to accept input from the user (keyboard, joystick, etc...) and that's a potential source of cheating! Throttle indicated at 110% sir!
DWM Hehe. Actually, if you re-read some of the other discussions on cheat detection, you'll see that this sort of thing would eventually be caught by the server noticing that the ship's kinetics exceed the bounds of its capabilities. My thought in that comment was that the server, on detecting such activity, would revoke the user's 'lease' on control of their ship. What it does at that point is up to the developer's imagination -- flies the ship to the nearest constabulary, spins it faster and faster until it blows up, or dives it into the nearest sun. As amusing examples.
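Getting back to the timestamped network updates above: here is a rough sketch of the extrapolation and gradual-convergence ideas. Everything in it (the types, the constant-velocity dead reckoning, the convergence rate) is illustrative only, not a committed design.

```cpp
// Hypothetical sketch: extrapolating a timestamped 'true state' to psim's
// current time, then converging the simulated state toward it.
struct Vec3 { double x, y, z; };

struct ObjectState {
    Vec3 position;
    Vec3 velocity;
    double time;   // simulation time this state is valid for
};

// Simple dead reckoning: assume constant velocity over the gap between the
// message timestamp and psim's current time.
ObjectState Extrapolate(const ObjectState& reported, double psimTime) {
    double dt = psimTime - reported.time;   // should be >= 0; if not, something's wrong
    ObjectState s = reported;
    s.position.x += s.velocity.x * dt;
    s.position.y += s.velocity.y * dt;
    s.position.z += s.velocity.z * dt;
    s.time = psimTime;
    return s;
}

// Gradual convergence: nudge the simulated state some fraction of the way
// toward the extrapolated true state on each psim step. 'Warp correction'
// would simply assign the target state outright.
void Converge(ObjectState& simulated, const ObjectState& target, double rate) {
    simulated.position.x += (target.position.x - simulated.position.x) * rate;
    simulated.position.y += (target.position.y - simulated.position.y) * rate;
    simulated.position.z += (target.position.z - simulated.position.z) * rate;
}
```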
Object Detectability
- Simply drawing all objects with the renderer is not acceptable. Objects which are too far away for visual detection by eye will be visible to ship sensors, and the rendering engine needs to know where these are so it can paint HUD overlays as needed.
- What is detectable is influenced by various factors:
- Ship sensor type, range, aperture angle
- Object emissions, reflectivity (vs. active ship sensors, and ? ambient emission sources)
- Occlusion by other objects (would be nice to get this right, where Jumpgate didn't)
- Model ship sensors as geometric objects in psim. Psim can report intersections; the ship's sensor system modelling code can then determine visibility. Psim must help with occlusion testing.
- Ship sensors thus develop a list of objects with relative position info, can be used for HUD overlay rendering, AI input, etc.
- Would be nice to model emissions and sensor spectrum characteristics, but would likely have to be kept very simple. Ad-hoc approach might be preferable for performance. Definitely don't try to model directional emissions/reflections from objects, that would add far too much complexity.
nfs (Re: not rendering all objects) Probably just take into consideration the distance and size of the object to determine whether to render it or not. Question: is an object twice as large as another visible from twice as far away?
DWM Well, I'd like something more complex than that, with different types of sensors, perhaps stealth coatings on ships, and the like. Also, I get the impression that I may need to explain the HUD overlays some more. You see, even if an object is not visible to the naked eye, and is too far away to even register as a pixel on your screen, it's entirely possible that your ship's eyes (sensors) will be able to see it and track it. In that case, I would expect a HUD overlay to show me the object in my field of view by putting up a (fixed-size) indicator on its position. I think you get this even in modern fighter jets with their radar systems.
And yes, object size needs to be accounted for. The Death Star can be detected from much further away than TIE fighters. :-)
nfs (Re: sensor parameters affecting detectability) The gui can be in charge of that. The object is 2 million km away, the current sensors only show out to 1 million km, so don't show it.
DWM Respectfully disagree. Again, I expect these interfaces to be available for AI and scripting engines, too.
nfs (re: occlusion affecting detectability) I've been reading about DirectX and its support for occlusion. I think we can do better than Jumpgate.
DWM That's fine for visual occlusion, but what about the cases I just described, where an object is too far away to render visibly? And again, what about AI and scripting modules?
nfs (re: psim helping with occlusion tests) Not sure I understand the psims role in occlusion testing?
DWM See above. It's a physical geometry problem more than a rendering problem. The renderer may perhaps do a more precise job of occlusion handling -- psim might only tell you "fully occluded" or "not fully occluded". The renderer obviously needs to do much more when figuring out how to display partially occluded objects. They're really almost separate problems.
(Much later...) See also comments on Scene Management. The physical model is usually used to cull the objects of interest for rendering, in order to reduce the burden on the graphics system.
Regarding occlusion, specifically, here's what I was thinking when I wrote the above (but evidently didn't explain well). For purely visual occlusion, the graphics engine can usually deal with it (modulo interesting issues like Z-buffer accuracy limitations, spherical vs. truncated pyramid view distance clipping, and spherical vs. linear fog modelling -- the latter causes problems, e.g., in Battlefield 2). But long-range sensors can "see" objects outside of visual ranges, usually depicted by HUD overlays. One thing that was sorely missing in Jumpgate was the ability to ambush someone, using asteroids as cover -- exploiting sensor occlusion. Since far objects like this never go through the rendering engine, sensor occlusion has to be effected using the physical model.
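To tie this together, here is a sketch of the kind of coarse query psim might answer for the sensor system: pure geometry, with only a yes/no occlusion answer, leaving emissions/stealth filtering and HUD display to other modules. The names and the cone-shaped sensor volume are placeholders I'm assuming for illustration.

```cpp
// Hypothetical sketch: psim reports which objects fall inside a sensor volume
// and whether each is fully occluded; the ship's sensor modelling code then
// decides what is actually detectable.
#include <vector>

struct SensorVolume {
    // Simplified: a cone with the ship at its apex, pointing along the ship's
    // facing, described here only by maximum range and half-angle.
    double range;
    double halfAngleRadians;
};

struct SensorContact {
    int objectId;
    double distance;
    bool fullyOccluded;   // coarse geometric answer; renderer refines visuals
};

struct IPsimSensorQuery {
    virtual ~IPsimSensorQuery() = default;
    // Pure geometry: no emissions, reflectivity, or stealth modelling here.
    virtual std::vector<SensorContact> Query(int shipId,
                                             const SensorVolume& vol) = 0;
};

// The sensor subsystem would filter the contacts (emissions, sensor type,
// stealth, ...) and hand the survivors to the HUD overlay, AI modules, etc.
```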
Psim Object Types
- Active objects (controlled by players or AI, local or remote -- no distinction)
- Passive objects (affected by interactions with other objects? how to model this, taking into account authorization concerns? look for other notes on wiki) (distinction between these and active objects is questionable)
- Massive objects -- modelled by orbital descriptions only, deterministic movement, no force modelling needed.
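A minimal sketch of how these categories might be distinguished inside psim (names are placeholders, and the active/passive split may collapse as noted above):

```cpp
// Hypothetical sketch of the three psim object categories.
enum class PsimObjectKind {
    Active,    // controlled by a player or AI, local or remote -- no distinction
    Passive,   // affected by interactions with other objects
    Massive    // orbital description only: deterministic movement, no force modelling
};

struct PsimObject {
    int id;
    PsimObjectKind kind;
    // Active/passive objects would carry position, velocity, accumulated forces;
    // massive objects would carry orbital elements instead.
};
```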
Coding Tasks
Following the above ideas, here's a list of things to do in rough priority order:
- Start with an app that has a psim object and a ship control object.
- Assume that to-be-written init code creates the initial psim at the right location in the universe and turns ship control over to the user flight app code, at an initial position in this sim corresponding to the initial state given along with the authorization to control it.
- Psim should acquire data about the initial state of all objects in the simulated region.
- Initially, hard code or load from file.
- Eventually, this comes from a combination of static database (massive objects) and distributed state system (active and passive objects), and includes code to register 'interest' in active objects.
- Arrange so that psim runs on a separate thread, with an API for local rendering engine to lock and access current state.
- psim controls its own step timing, influenced by input from outside (CPU budget) and accuracy considerations (try to take small sim steps, within budget)
- eventually, step timing should be adaptive, monitoring its own CPU usage. Make sure this algorithm stabilizes rapidly at startup and again after perturbations.
- Add an update input queue to psim
- No distinction should be needed between updates from player app threads vs from network.
- however, there are different types of updates:
- Controller updates adjust internally generated forces, are not timestamped, and are applied ASAP (but no sooner). These usually come from local player input, but in some situations controller might be remote. These should be received only for objects controlled by the local node. (Does psim need to know/check this?)
- example: a server controlling docked ships or other 'linked' objects with disparate controllers. User apps send controller updates to the server.
- State updates. These come (possibly indirectly, via a server) from a psim that controls an object, and denote updates to an object's 'true state'. They are timestamped and retroactive. If psim's time is ahead of timestamp, true state for current time must be extrapolated. (If psim's time is behind timestamp, something's wrong!)
- psim doesn't care about the ship's control subsystems, just their effect on the ship's physics.
- Add basic control subsystems to ship control object.
- Propulsion and attitude control systems
- Provide APIs that allow UI to easily specify throttle setting etc, prob. in terms of percent thrust.
- Requests get translated into state update requests to queue to psim.
- Message queues to psim
- queues should be "smart": cull messages that are superseded by later messages (see the sketch after this list).
- controller updates are culled by keeping only the latest update for a given object. (OK, maybe they do need to be timestamped to avoid problems with out-of-order network delivery)
- state updates are culled by keeping only the latest state update for a given object.
- Hmm, culling is actually almost identical for these two types.
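Since the culling rule really is almost identical for both update types, here is a rough sketch of a single 'smart' queue keyed on (object, update kind). The names and types are placeholders; the timestamp check is there only to guard against out-of-order delivery, as noted above.

```cpp
// Hypothetical sketch of a 'smart' update queue: only the latest update per
// (object, update kind) survives, covering both controller and state updates.
#include <cstdint>
#include <map>
#include <mutex>
#include <utility>
#include <vector>

enum class UpdateKind { Controller, State };

struct Update {
    int objectId;
    UpdateKind kind;
    std::uint64_t timestamp;   // guards against out-of-order network delivery
    // ... payload: forces for controller updates, full state for state updates
};

class UpdateQueue {
public:
    void Push(const Update& u) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto key = std::make_pair(u.objectId, u.kind);
        auto it = latest_.find(key);
        // Cull: keep only the newest update for each object/kind pair.
        if (it == latest_.end() || it->second.timestamp <= u.timestamp)
            latest_[key] = u;
    }

    // Psim calls this once at the start of each cycle.
    std::vector<Update> Drain() {
        std::lock_guard<std::mutex> lock(mutex_);
        std::vector<Update> out;
        for (auto& entry : latest_) out.push_back(entry.second);
        latest_.clear();
        return out;
    }

private:
    std::mutex mutex_;
    std::map<std::pair<int, UpdateKind>, Update> latest_;
};
```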