Software_Architecture_Ideas/Game Loop

The following goes a bit beyond describing a 'game loop', but tries to pull together a lot of ideas concretely enough to guide some coding. I originally posted this on my scratchpad page, where Frank added some comments at my request. I've tried to clean it up just a little and organize it so that it will be easier to carry on an exchange of ideas. DWM

This focuses on control and data flow in the player program, specifically the real-time flight simulation portion. See also the older and more narrative description in Software Architecture Ideas/General Distributed Architecture for some context.

Let's assume you've just launched your ship from a station. The distributed game infrastructure has already granted you authority to control your ship.

Ship Control Object

Create an object to represent the ship. Its interface lets you interrogate the ship's state and control its systems.

nfs (re: ship subsystems as subobjects) So, all objects share properties like mass, volume, unit cost (assuming a player can buy/sell them), and manufacturer (a compatibility switch?). Subobjects (like, for instance, an engine) would also have thrust, etc.

DWM Actually, since I'm describing the ship control interface, I was thinking of subsystem control interfaces. Some of the things you describe here belong in the control interface, and some don't, I think.
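
To make that distinction concrete, here is a minimal C++ sketch. All names are hypothetical, not a settled design: static item properties (mass, volume, cost, manufacturer) live in a plain data structure, while the runtime control interface is a separate abstraction that the ship control object hands out per subsystem.

    // Hypothetical names throughout; a sketch of the distinction, not a design.
    #include <string>

    // Static properties shared by anything a player can buy, sell, or mount.
    struct ItemProperties {
        double mass_kg;
        double volume_m3;
        double unit_cost;
        std::string manufacturer;   // could drive compatibility checks
    };

    // Runtime control interface for one subsystem.
    class EngineControl {
    public:
        virtual ~EngineControl() = default;
        virtual double maxThrust_N() const = 0;         // interrogate
        virtual void setThrottle(double fraction) = 0;  // control, 0.0-1.0
    };

    // The ship control object aggregates subsystem interfaces.
    class ShipControl {
    public:
        virtual ~ShipControl() = default;
        virtual const ItemProperties& properties() const = 0;
        virtual EngineControl& mainEngine() = 0;
    };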

Rendering System Requirements

nfs Still concerned about how we define "immediate vicinity."

DWM Yes, that will be difficult. We need to design a hierarchy of positional representations. Or we need to partition space artificially, which, as you know, I'm very loath to do.

nfs I'm a little concerned about running the psim on a separate thread. I think that many of the advantages of running in parallel will be outweighed by the blocking necessary for the data-gathering loop. In fact, I think that the local psim could be controlled (or paced) by the gui. That is, the gui would control its update loop; that way the gui could gather the data for its current rendering cycle without worrying about the changing state of the psim. (Who knows, maybe I'm just biased since I'm a gui coder at heart. ;) )

DWM I went back and forth on this in my own mind. However, in the end I decided it's worth trying separate threads. Psim will be receiving asynchronous input from the network in any case, so having control input from the UI feed into the same queues doesn't add much complexity and probably negligible overhead. Synchronizing on the entire system state (rather than on individual objects) will be easy and will also incur negligible overhead.

There are several benefits to using separate threads, I think.

Interestingly, in this approach, the rendering loop will have a time budget too! So it would self-limit its frame rate as needed. I have to check to make sure that the necessary metering tools are available to do this, though. I'll look into that ASAP.
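
As a concrete illustration of this approach (all types and names are hypothetical), the psim thread could drain a single input queue fed by both the UI and the network, then publish a copy of the whole system state that the rendering loop picks up once per frame:

    // Minimal sketch of the threading model above: psim runs on its own
    // thread, UI and network feed one input queue, and the renderer
    // synchronizes on a whole-state snapshot rather than individual objects.
    #include <atomic>
    #include <chrono>
    #include <mutex>
    #include <queue>
    #include <thread>

    struct InputEvent { /* control command or network message */ };
    struct WorldState { /* positions, velocities, ... */ };

    class Psim {
    public:
        void post(const InputEvent& e) {               // called by UI/net threads
            std::lock_guard<std::mutex> lock(inMutex_);
            inbox_.push(e);
        }
        WorldState snapshot() const {                  // called by the renderer
            std::lock_guard<std::mutex> lock(stateMutex_);
            return state_;                             // copy of the whole state
        }
        void run(std::atomic<bool>& quit) {            // psim thread body
            using clock = std::chrono::steady_clock;
            auto next = clock::now();
            while (!quit) {
                drainInputs();
                step();                                 // advance the physics
                next += std::chrono::milliseconds(10);  // fixed time budget
                std::this_thread::sleep_until(next);
            }
        }
    private:
        void drainInputs() {
            std::lock_guard<std::mutex> lock(inMutex_);
            while (!inbox_.empty()) { /* apply inbox_.front() */ inbox_.pop(); }
        }
        void step() {
            std::lock_guard<std::mutex> lock(stateMutex_);
            /* integrate state_ forward one tick */
        }
        mutable std::mutex stateMutex_;
        std::mutex inMutex_;
        std::queue<InputEvent> inbox_;
        WorldState state_;
    };

Under this scheme the renderer pays one lock and one state copy per frame -- the negligible synchronization overhead mentioned above -- and the same sleep-until-deadline pacing gives each loop its own time budget.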

User and network input processing

nfs (re: parenthetical comment on cheating) Of course, you'd still need to accept input from the user (keyboard, joystick, etc.), and that's a potential source of cheating! Throttle indicated at 110%, sir!

DWM Hehe. Actually, if you re-read some of the other discussions on cheat detection, you'll see that this sort of thing would eventually be caught by the server noticing that the ship's kinetics exceed the bounds of its capabilities. My thought in that comment was that the server, on detecting such activity, would revoke the user's 'lease' on control of their ship. What it does at that point is up to the developer's imagination -- flies the ship to the nearest constabulary, spins it faster and faster until it blows up, or dives it into the nearest sun, as amusing examples.
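
As a sketch of that bounds check (names hypothetical; a real check would integrate over the reporting interval and account for gravity and other external forces), the server might compare reported acceleration against what the ship's declared engine could possibly produce:

    // Hypothetical server-side sanity check: flag a ship whose reported
    // acceleration exceeds what its declared engine could produce (F = m*a).
    struct ShipCaps {
        double maxThrust_N;
        double mass_kg;
    };

    // Returns false if the reported kinetics are impossible; the server could
    // then revoke the client's control lease, as described above.
    bool kineticsPlausible(const ShipCaps& caps, double reportedAccel_ms2) {
        const double tolerance = 1.05;  // small slack for numerical error
        double maxAccel_ms2 = caps.maxThrust_N / caps.mass_kg;
        return reportedAccel_ms2 <= maxAccel_ms2 * tolerance;
    }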

Object Detectability

nfs (Re: not rendering all objects) Probably just take into consideration the distance and size of the object to determine whether to render it or not. Question: is an object twice as large as another visible from twice as far away?

DWM Well, I'd like something more complex than that, with different types of sensors, perhaps stealth coatings on ships, and the like. Also, I get the impression that I may need to explain the HUD overlays some more. You see, even if an object is not visible to the naked eye, and is too far away to even register as a pixel on your screen, it's entirely possible that your ship's eyes (sensors) will be able to see it and track it. In that case, I would expect a HUD overlay to show me the object in my field of view by putting up a (fixed-size) indicator on its position. I think you get this even in modern fighter jets with their radar systems.

And yes, object size needs to be accounted for. The Death Star can be detected from much farther away than the TIE fighters. :-)
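
To answer the size/distance question concretely: if naked-eye visibility is keyed to apparent (angular) size, then an object twice as large is indeed visible from exactly twice as far away, since size/distance is the quantity being thresholded. A purely illustrative sketch (the threshold value is a made-up placeholder):

    // Illustrative only: naked-eye visibility keyed to apparent angular size.
    // Under this rule, doubling an object's size doubles its visible range.
    bool visibleToEye(double objectDiameter_m, double distance_m) {
        const double minAngle_rad = 3e-4;  // placeholder: ~one pixel of arc
        // Small-angle approximation: apparent angle ~ diameter / distance.
        return (objectDiameter_m / distance_m) >= minAngle_rad;
    }

Sensor detectability would be a different function of size, distance, signature, and stealth coatings, feeding the HUD overlay for contacts that fail the visual test but pass the sensor test.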

nfs (Re: sensor parameters affecting detectability) The gui can be in charge of that. The object is 2 million km away, the current sensors only show out to 1 million km, so don't show it.

DWM Respectfully disagree. Again, I expect these interfaces to be available for AI and scripting engines, too.

nfs (re: occlusion affecting detectability) I've been reading about DirectX and its support for occlusion. I think we can do better than Jumpgate.

DWM That's fine for visual occlusion, but what about the cases I just described, where an object is too far away to render visibly? And again, what about AI and scripting modules?

nfs (re: psim helping with occlusion tests) Not sure I understand the psim's role in occlusion testing?

DWM See above. It's a physical geometry problem more than a rendering problem. The renderer may perhaps do a more precise job of occlusion handling -- psim might only tell you "fully occluded" or "not fully occluded". The renderer obviously needs to do much more when figuring out how to display partially occluded objects. They're really almost separate problems.

(Much later...) See also comments on Scene Management. The physical model is usually used to cull the objects of interest for rendering, in order to reduce the burden on the graphics system.

Regarding occlusion, specifically, here's what I was thinking when I wrote the above (but evidently didn't explain well). For purely visual occlusion, the graphics engine can usually deal with it (modulo interesting issues like Z-buffer accuracy limitations, spherical vs. truncated pyramid view distance clipping, and spherical vs. linear fog modelling -- the latter causes problems, e.g., in Battlefield 2). But long-range sensors can "see" objects outside of visual ranges, usually depicted by HUD overlays. One thing that was sorely missing in Jumpgate was the ability to ambush someone, using asteroids as cover -- exploiting sensor occlusion. Since far objects like this never go through the rendering engine, sensor occlusion has to be effected using the physical model.
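
A sketch of what that psim-level test might look like (types hypothetical): cast a segment from the sensor to the target and check it against occluder bounding spheres, yielding only the coarse "fully occluded / not fully occluded" answer mentioned above. The occluder list should exclude the target's own geometry.

    #include <vector>

    struct Vec3 { double x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    struct Sphere { Vec3 center; double radius; };

    // True if the segment from..to passes through the sphere.
    bool segmentHitsSphere(Vec3 from, Vec3 to, const Sphere& s) {
        Vec3 d = sub(to, from);                       // segment direction
        Vec3 m = sub(from, s.center);
        double dd = dot(d, d);
        double t = dd > 0.0 ? -dot(m, d) / dd : 0.0;  // closest approach
        if (t < 0.0) t = 0.0;                         // clamp to the segment
        if (t > 1.0) t = 1.0;
        Vec3 p = {from.x + t*d.x, from.y + t*d.y, from.z + t*d.z};
        Vec3 pc = sub(p, s.center);
        return dot(pc, pc) <= s.radius * s.radius;
    }

    // Coarse sensor occlusion: is any occluder between sensor and target?
    bool sensorOccluded(Vec3 sensor, Vec3 target,
                        const std::vector<Sphere>& occluders) {
        for (const Sphere& s : occluders)
            if (segmentHitsSphere(sensor, target, s)) return true;
        return false;
    }

Because this runs in the physical model rather than the renderer, the same test serves the HUD, AI modules, and scripting engines alike, and makes the asteroid-ambush tactic possible for objects that never reach the rendering engine.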

Psim Object Types

Coding Tasks

Following the above ideas, here's a list of things to do in rough priority order: