Hierarchical Coordinates
SDxWiki

See also Astronomical Quantities.

For amusement, the usual liberal sprinkling of mindless flames, and just a tiny bit of insight into other developers' thinking on these matters, see [this massive usenet thread].

Work in progress

Ignore the adjective 'hierarchical' in the title for now. The way my thinking is currently running, it may not be relevant.

Input from people who know something about rendering engines would be appreciated. What kind of coordinate representations and ranges can be dealt with?

Assumptions:

Bit requirements:

I anticipate three primary sources for physical objects to be modelled:

Minimizing computational and network loads is important. Simulation of physical objects is assumed to be a significant computational load. Dead reckoning minimizes network traffic by having each node in the network (that cares about physical objects) simulate all the objects of interest itself. Some of those objects may be in the control of agents (players or AI) that are not known locally; each such object has an owning node whose information is assumed to be authoritative (barring detection of cheating, which I leave aside for now). The owning node simulates the owned objects just as other nodes do, but additionally it applies input from the controlling agent to determine the object's actual state. When the purely-simulated state and the controlled-simulated state diverge by a certain amount, updates are pushed to the other nodes that are simulating the object (which must have subscribed for such updates, indicating an interest in the object).
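In rough C++, the owner-side divergence check might look like the sketch below (every name and the threshold are placeholders to make the flow concrete, not a design decision):

    #include <cmath>

    struct Vec3 { double x, y, z; };

    static double dist(const Vec3& a, const Vec3& b) {
        const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx*dx + dy*dy + dz*dz);
    }

    struct State { Vec3 pos; Vec3 vel; };

    void applyAgentInput(State&, double);        // assumed elsewhere
    void integrate(State&, double);              // assumed elsewhere
    void pushUpdateToSubscribers(const State&);  // assumed elsewhere

    const double kMaxDrift = 0.5;  // meters; tunable per object class

    void ownerTick(State& truth, State& reckoned, double dt) {
        applyAgentInput(truth, dt);   // controlling agent's input, owner only
        integrate(truth, dt);         // authoritative state
        integrate(reckoned, dt);      // what the subscribing nodes compute

        if (dist(truth.pos, reckoned.pos) > kMaxDrift) {
            pushUpdateToSubscribers(truth);  // network send
            reckoned = truth;                // resync the local mirror
        }
    }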

Since a given node may own multiple objects and may be the host of multiple controlling agents, it makes sense that a single physical simulator should be able to accommodate multiple owned objects when those objects share an environment. (An alternative might be to have a full simulator for each owned object; this would be architecturally simpler, but most likely more expensive computationally.)

The easiest way for me to explain and explore my thoughts on coordinate management as it relates to this architecture is to describe a scenario. Let's consider one which starts with a player on her home computer who launches her ship into space.

The client program determines which server currently owns her ship (remember that a controlled object is always owned by some node), and requests a lease on it. The server, presumably having already authenticated the player, grants the lease and gives current state data to the client. The client now owns the ship, and is the authoritative node for it.

The client creates a new physical simulator. The simulator has a global coordinate describing the center of the region that it simulates. All coordinates in the simulation will be relative to this origin. These coordinates have a resolution and range appropriate for the physical simulation; the range will be much smaller than that of the global coordinates.
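One plausible shape for this two-tier representation, with the type widths as pure guesses (and 1 m units assumed throughout):

    #include <cstdint>
    #include <vector>

    // Wide, coarse coordinates for the whole arena:
    struct GlobalPos { int64_t x, y, z; };   // +/- 2^63 m is roughly
                                             // +/- 975 light-years

    // Narrow, fine coordinates inside one simulator:
    struct LocalPos  { int32_t x, y, z; };   // +/- 2^31 m is about
                                             // 7 light-seconds

    struct SimObject { LocalPos pos; /* velocity, mass, ... */ };

    struct Simulator {
        GlobalPos origin;                 // center of the simulated region
        std::vector<SimObject> objects;   // stored relative to the origin
    };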

Now the static star system database is examined. The scales of objects in this database are such that at most one system is close enough to the simulation's center to require detailed handling. The rest of the systems are processed to produce a description of a background: the star field that will surround this region for the life of this simulator. The feasible range of Newtonian motion, and the size of the simulator's region vis-a-vis the distances to these star systems, preclude any need to dynamically model the star field. (Perhaps the star field can be reduced to a description of its projection on a bounding sphere or box. This representation could be shared with the renderer, which will probably make much more use of it than the physical simulator.)
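A minimal sketch of that reduction, collapsing each distant system to a unit direction plus an apparent brightness (the record layout and the inverse-square brightness model are placeholders):

    #include <cmath>
    #include <vector>

    struct StarEntry      { double x, y, z; double luminosity; };
    struct BackgroundStar { double dx, dy, dz; double apparent; };

    std::vector<BackgroundStar> buildBackground(
            const std::vector<StarEntry>& stars,
            double cx, double cy, double cz) {
        std::vector<BackgroundStar> bg;
        for (const StarEntry& s : stars) {
            const double x = s.x - cx, y = s.y - cy, z = s.z - cz;
            const double d = std::sqrt(x*x + y*y + z*z);
            if (d == 0.0) continue;                  // the one local system
            bg.push_back({ x/d, y/d, z/d,            // unit direction
                           s.luminosity / (d*d) });  // inverse-square falloff
        }
        return bg;
    }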

DJH Practical note: When shooting starfields we always used a flat background. Scanning rates were determined based on pan rates, where angles were treated as linear units. In/out and left/right motion over short distances at sublight velocities doesn't require any changes to the view, because the distance between the viewer and the star is essentially infinite, making the relative distance travelled infinitesimally small. So XYZ movements will not change what the viewer sees; only rotational movement will. One way of implementing this does not require a bounding box at all. Instead, all "close" objects would be rendered in front of a static screen with a shifting image. The starfield would be generic and tiled (for dogfight backgrounds, anyway - actual navigation computations would be a separate process that wouldn't involve looking out the window).

DWM I understood less than half of that, which is why I want someone else to worry about rendering. :-) But the fact that (Newtonian) translation won't affect the apparent star field is exactly what I was alluding to. And I'm with you on the nav comps comment, too.

Istvan If we are reliant upon the database of stars actually used in the game for generating starfields, we are going to see very sparse starfields. If we use actual luminosity information tied to the actual stellar data, a spherical starfield for any interior point using all stars within 50 light-years of Earth will have possibly thirty visible stars. Near the edge of the simulated volume this will be worse, and there will be large expanses of "blank" space. I presume there is an intention to use a pregenerated "starfield background" that is invariant at all points in the simulation? This might receive the local stars as an overlay, which overlay is altered by the simulation dependent upon interior coordinates, but which is only altered when the player-observer changes star systems (are we going to permit players to stop in deep space? A question for the Locomotion sections...).

DJH I'm taking a minimalist approach here. Or perhaps fractal. Yes, fractal - it sounds better :-). Anyway, it's an extension of the basic idea that space is homogeneous; that is, it looks the same no matter where you point your telescope, and within reason no matter how far away (or how far back in time) you look. Thus different types of objects will exist in approximately the same distribution in all fields of view (major ignoring here of local galaxy-scale issues - are you in a spiral, a globular cluster, and so on). So unless it's somehow important to the game that the starfield background is "correct", meaning you can look out the window and see the right stars in the right places, then a totally generic starfield projection is adequate for a stellar background. Think about it - in all the movies you ever watched that were set in space local to our system, did you ever once examine the astronomical accuracy, or were you looking at the foreground objects? I'm sure you all watched Moontrap (NOT! :-D ); all the shots were from Earth orbit, but the starfield never matched reality. It was a 4'x6' piece of black velvet with thousands of tiny holes punched in it randomly and then backlit so light would come through all those dang little holes.

The point of all my rambling is this: players will be watching other things, like nearby ships, planets, moons, asteroids, etc., and won't give a rip about star maps. Therefore we won't need to map real systems for starfields, ever. Should we go out of our system, unusual objects such as nearby nebulae can be treated as special cases. And we can build up a basic astronomical database for dealing with travel, but I suspect we won't ever need to turn that information into visual elements. <getting back on topic> So I think our hierarchical coordinate system can be developed independently of the displays.</getting back on topic>

Istvan No argument. This is a point where accuracy and simplicity can both be served by restricting the environment to a single system. It would not, I think, be terribly difficult to render a background that is a reasonably accurate view of the "stellar background" as seen from our solar system. In fact, considering that the "fixed stars" have always been an absolute reference for navigation, doing an accurate background (once and only once) caters to Dan's apparent desire for players to be able to try seat-of-the-pants navigation. Even if your NavComp is damaged, a player could use the constellations as a frame of reference to help choose how far to set a jump with his S-space drive, if he knows generally where he is and has a "feel" for distances and the angular positions of major planets. I figure if I could trust the stellar background, I could navigate with two jumps to within an AU or two of any planetary destination, even if my displays were down, as long as the planetary objects were visible against the stellar background as points of light and my S-space drive is working....

DWM On this page, at least, I'm assuming for the sake of argument that we are trying to simulate the entire galaxy or a sizeable portion of it. The database would be very large, tens of thousands of systems (which I know is still only a fraction of the galactic total). Beyond the galaxy, a pregenerated flat background could be used to provide the additional visual effect of extra-galactic objects, which I believe is fairly significant?

When contemplating the insanity of a database with tens of thousands of systems, keep in mind that very little data should be needed for each, I hope. Position, visual appearance and size category (which could be reduced to a type code), and a unique identifier of some sort. Additional information (name, seed, ???) is not static and would not be needed for generating a background, so would not be cached locally. Also, the background generated from this data only changes during a jump -- see comments below to Frank.
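To put a hypothetical shape on such a record (field sizes are guesses, purely to show how small it can be):

    #include <cstdint>

    #pragma pack(push, 1)
    struct StarSystemRecord {
        int64_t  x, y, z;    // galactic position, coarse units
        uint8_t  typeCode;   // visual appearance / size category
        uint32_t id;         // unique identifier
    };
    #pragma pack(pop)
    // 29 bytes each: 50,000 systems still fit in under 1.5 MB.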

Orbital objects? Separate simulator feeding the physical simulator?

Istvan It's probably inherently obvious, but it's the relative motion of natural objects in the system that is going to present the rendering issues as the player-observer moves through the simulation.

Radius of focus.

Edge event scenarios: Split, slide, merge.

The player ship's state information is translated to the simulator's coordinate system.

Frank Swierz Dan, I'm a little confused by the "simulation's center." I was assuming that we need a three-dimensional grid overlaying each solar system? I'm thinking about something where point (0,0,0) is the location of the star (or the focus point of a binary system?). Objects in space can then be positioned relative to this coordinate system.

DWM Somewhere, yes, you need an overall coordinate system. Since on this page I'm thinking in terms of multiple systems covering (a significant portion of) the galaxy, the 'global coordinates' cover this. But each node runs its own physical simulator, and the simulator only needs to be concerned with objects that you can directly interact with -- a much smaller arena. (If it covered more than a few light-seconds, then you'd by rights have to start worrying about time delays in observing events!) So there's no point in burdening the physical simulator with large coordinate representations. Use smaller coordinates -- but that precludes using the sun as a center, because then it's too far away!

Orbital object simulation math really wants to be working in a coordinate system somewhere between these two extremes, and I need to figure out how to reconcile that, hence the note above.

Istvan Possibly irrelevant at this stage in the discussion, but the orbital object simulation level is going to need to work in cylindrical (2 vec, 1 ang) or spherical (1 vec, 2 ang) coordinates. Depending on how we do orbital motions, cylindrical coordinates will require an additional conversion step, while spherical would likely not. Cylindrical coordinates have the advantage of being slightly more accessible to the average human brain - so our choice of coordinate systems, and any reflection of them in our interfaces, needs to be driven by how much human interaction will be required as part of gameplay. On the overall (interstellar) level, we are going to have a rough time getting coordinates in anything other than Earth-centric, time-dependent, spherical projected coordinates. The Near Star List I mentioned elsewhere, which only covers a spherical volume within 50 light-years, provided (for game purposes) only translated Cartesian coordinates - easier to visualize, but problematic to reconcile with a more complete astronomical database.
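For reference, the two conversions to Cartesian coordinates look like this (angle conventions assumed here: theta measured from +Z, phi in the XY plane from +X):

    #include <cmath>

    struct Cart { double x, y, z; };

    // spherical: 1 vector (r), 2 angles (theta, phi)
    Cart fromSpherical(double r, double theta, double phi) {
        return { r * std::sin(theta) * std::cos(phi),
                 r * std::sin(theta) * std::sin(phi),
                 r * std::cos(theta) };
    }

    // cylindrical: 2 vectors (rho, z), 1 angle (phi)
    Cart fromCylindrical(double rho, double phi, double z) {
        return { rho * std::cos(phi), rho * std::sin(phi), z };
    }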

DWM Are you saying that the databases do not contain complete three-dimensional position information, but just apparent position from an Earth viewpoint ('spherical projected')? That would be unfortunate.

Istvan Yeppers, that's it exactly. You see... astronomers have no need to do other than assume Earth's surface is the frame of reference for all their work. I once made the naive mistake of pointing out to the astronomy department chairman after a class that what he had been describing as "purely abstract knowledge" actually had a foundational basis for a forward-looking society that would be dealing with stellar bodies for navigation some centuries hence. He treated me as some kind of bug ever after, probably thinking I was some starry-eyed Trekkie. Damn it, science fiction has the power to inspire kids toward the sciences... and we need more of that if we are going to get off this rock before we're knee-deep in our own blood, sewage and vomit. Sorry, didn't mean to get all political. (I'll leave the comment for a bit because it's an honest look at where I'm coming from with some of my idealism.)

DJH Astronomers need to be Earth-centric because they need to adjust for the Earth's motion when aiming their telescopes. Given that even in a Sol-centric game our players will not be centering their actions around Earth, and that Earth has this nasty habit of moving relative to other objects in the system, I'd vote for an origin that is central to the system itself (the sun). "Relatively" speaking, when you're in a system, all things revolve around the anchoring star (no Ptolemaic universes, please!). The sun can therefore be treated as a stationary object, as can all other stars and galaxies in the universe.

Istvan I would no more consider using Ptolemaic astronomy than I would contemplate playing the game on an IBM 8088. :-) I agree that Sol, specifically its center of mass, should be the origin of our coordinate system. This is another case wherein our problems are simplified by only dealing with one star system (for a while). Alpha Centauri will be interesting in that aCen A and aCen B revolve around a common center of mass.... But once we have built a decent simulation of the Sol system, we can examine strategies for extending the methodology to more interesting systems.

DWM Sol-centered coordinates may be appropriate for one tier of the coordinate system. But part of what I was trying to get at on this page (except that my time has been hijacked by other things less fun than SDxWiki, and responding to these comments) is that the range and resolution desired even within a single solar system are too large for conventional processors to handle efficiently. As soon as you're over 32 bits, the CPU cycles start to add up. I may be overestimating the impact of this, but in my opinion a) it is significant, b) the problem is worth addressing for other reasons, and c) it can be addressed.

What I'm driving at is that the physical simulator, which will sometimes have a lot of work to do, need only simulate objects within a region of interest which is much smaller than a solar system. When yer buzzin' around Jupiter, you don't give a rat's ass about a sensor in orbit around Saturn. So by the time you're down to modelling objects in the simulator, it ought to be possible to arrange things so that you're working in a 32-bit coordinate system.

The hard part is handling the fact that this local coordinate system will move. For a physical simulator running on a player node, it makes sense to have the user's ship at (0,0,0), and translate all other objects' motion and position into this frame of reference. You'll have to determine when other objects enter and leave the area of interest, but at least part of that problem is inherent to a distributed 'dead reckoning' system anyway. In the more general case, we may need to run physical simulators that track more than one player -- e.g. a server that's verifying the activity of three player nodes and serving as authoritative adjudicator. In this case, you could run a simulator for each player, but that would be wasteful. If you combine them into one simulator, then you no longer have a clear point of reference -- so instead I suggest that a simulator have a center that is static with respect to the larger 'global' coordinate system (whether that be a system-centric or galaxy-centric one).

Hence the reference to 'edge events': Split occurs if two objects that both need detailed simulation move too far apart to be contained in one simulator. (We have to ensure that if this happens, they're also far enough apart to not care about one another, speaking physically of course.) Slide occurs when the physical simulator's center has to be adjusted to keep all the objects of interest within its bounds. And merge occurs when two objects of interest are being simulated in separate simulators (on the same node), but they have approached each other to the extent that they can interact physically.

See I was gonna explain all this, but ... oh, never mind, I just did. :-)
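A rough cut of 'slide', the simplest of the three, reusing the GlobalPos/LocalPos sketch from above (overflow checks and subscriber notification omitted):

    // Move the simulator's center by (dx, dy, dz) and rebase every object
    // by the opposite amount in the same pass, so all relative positions
    // -- the only thing the physics cares about -- are unchanged.
    void slide(Simulator& sim, int32_t dx, int32_t dy, int32_t dz) {
        sim.origin.x += dx;  sim.origin.y += dy;  sim.origin.z += dz;
        for (SimObject& o : sim.objects) {
            o.pos.x -= dx;   o.pos.y -= dy;   o.pos.z -= dz;
        }
    }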

Istvan That all made excellent sense to me. I'm glad I don't have to code it.... Seriously, and this may not be as relevant as I'd like, but my first reaction is to try to assign the nearest large mass as the center of the local coordinate system. Notice that this is more liable to be impractical in a free-flight locomotion model than in a gate-driven locomotion model, because in free flight the statistical distribution of players through the environment is more homogeneous, while with a gate-based approach players will be vastly more likely to be near a gate object that itself (or whatever the gate orbits) can be the center of the local coordinate system. Note that the sector size used in JumpGate is so large that even given a true Newtonian flight model (with no velocity caps) transit times across the sector would be several hours (the JumpGate sectors are big enough to fit a coordinate system larger than a lunar orbital diameter - I think I did math showing that an entire jovian satellite group with parent planet could sit within one, IIRC). I visualize a system with something like this as a potential local coordinate frame of reference - a number of "statically managed" regions like this, plus "spawns" that split off when needed and merge back. With an approach like this I'd hope to minimize the split and merge events. I don't know enough to know if the scheme is practical - and our choice of locomotion methods may obviate it anyway.

DWM The split and merge events might not be as common as one might fear. The typical physical simulator is serving to simulate the physical environment of a player's ship. If another player's ship is too far away to interact physically with it, then the second ship doesn't need to be simulated locally. So a 'split' event might actually turn into a 'drop' event. If some automated object (like a missile) gets outside the player's physical sphere, then we could cheat and destroy the missile, or the missile could be handed to the server to take over as owner. (There are a lot of details to work out here.)

I concede that we might want some physical simulators to be static, but I'm sceptical. I guess this needs to segue to a discussion of the distributed simulation architecture. (My build finished, back to work for pay...)

Istvan Especially if we use a free-flight fast travel scheme, I understand and agree with your skepticism. That scheme seems to me to inherently mandate a dynamic approach to the simulator frames.

I like your idea of projecting the background star field onto the bounding box. Calculating the background images dynamically for any solar system in the universe might be difficult though.

DWM Computationally intensive, if we have a large database of systems, but not particularly difficult -- projection of rays onto a surface. (By the time you get a basic renderer working, it definitely won't seem hard!) If things go the way I'd like, then jumps will take a significant amount of time to complete, and we'll know the jump exit point when the jump starts -- so you can use the transit time to build this background image. We'll show the player some psychedelic representation of what S-space does to his mind while he's waiting. :-)

Random Thought: I wonder if we need more positional accuracy than 1.0 meters. What would it look like if you were sitting very close to another object traveling at nearly the same velocity and vector? If the speed difference was very small (maybe .1 m/s difference), then every ten seconds the ships would appear to jump a meter (about three feet) relative to each other. Maybe I'm just worrying too much. Somebody help me out. :)

DWM Depends on the scales of the objects, I think, but you may be right. I just used one meter as a starting point to see what kinds of bit requirements I'd get. Since we get them in packets of eight, some horse-trading between resolution, range, coordinate representation size, and processing time is in order. I'm trying to work my way towards that.
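To make the horse-trading concrete (numbers illustrative only): covering +/- 40 AU -- roughly Pluto's orbit -- at 1 m resolution takes log2(2 x 40 x 1.496e11) = ~44 bits per axis, which rounds up to 48 bits (six octets) once you buy them eight at a time. Relax the resolution to 16 m and you fit in 40 bits; shrink the range to +/- 1 light-second and 32 bits cover it with better-than-meter resolution to spare.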

DWM Addendum: I'm designing our physics simulator so that it will be easy to change the data type used. All the simulators I've looked at so far, for games and otherwise, seem to use doubles (64-bit floating point, typically IEEE format). This has interesting implications. First, it has a wider precise range than a 32-bit integer. Second, it has a far wider range with a loss of precision. I suppose this might work out rather well. I haven't run the numbers yet, but I suspect that limiting the scope of a given simulator will be motivated by a desire to bound the number of objects, rather than by a need to avoid numerical explosions.
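For instance, a sketch of keeping the scalar type swappable (all names made up, and the ranges assume 1 m units):

    #include <cstdint>

    template <typename Scalar>
    struct Vec3T { Scalar x, y, z; };

    template <typename Scalar>
    class PhysicsSim {
    public:
        using Vec3 = Vec3T<Scalar>;
        void step(double dt);   // integrate all objects forward by dt
        // ...
    };

    // double: integers are exact up to 2^53, about +/- 0.95 light-year,
    // and beyond that precision degrades gracefully instead of wrapping.
    using Sim64 = PhysicsSim<double>;
    // int32_t: +/- 2^31 m (about 7 light-seconds), exact throughout.
    using Sim32 = PhysicsSim<int32_t>;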