Coordinate Systems

Eventually this page should be expanded to discuss the hierarchical coordinate system needed to efficiently deal with the large extent of space. At the moment, it discusses only smaller issues related to object modelling and rendering.

Rendering and Physics Coordinate Systems

Problem: The rendering system (DirectX 8) uses a left-handed (LH) coordinate system, which is common in computer graphics. Physics and engineering, however, use a right-handed (RH) coordinate system, and so does our physics simulator. Vertex list ordering differs between LH and RH systems, because the ordering of vertices defines the 'outer' and 'inner' faces of triangles. Vertex lists generated by DirectX routines are LH lists. Eventually, the physics engine will need vertex lists too, in order to define detailed body geometries for collision detection. Switching between systems is not difficult for individual points, since it can be modelled as a negative scaling of one axis, and this scaling can be folded into transformation matrices that are needed anyway. But the vertex list ordering issue must be coordinated carefully. Most likely, the physics simulator and the rendering engine will each keep their own vertex lists, and reordering must occur when one is generated from the other.
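
As a concrete illustration, here is a minimal sketch of such a conversion (the Vertex layout and the choice of Z as the flipped axis are assumptions for illustration, not a project decision; any single-axis flip works, provided the winding is fixed up too):

    struct Vertex { float x, y, z; };

    void FlipHandedness(Vertex* verts, int numVerts,
                        unsigned short* indices, int numTris)
    {
        // Negate one axis to mirror each point across the XY plane.
        for (int i = 0; i < numVerts; ++i)
            verts[i].z = -verts[i].z;

        // Mirroring turns each triangle inside out, so swap two indices
        // per triangle to restore the intended outward-facing winding.
        for (int t = 0; t < numTris; ++t) {
            unsigned short tmp = indices[3*t + 1];
            indices[3*t + 1] = indices[3*t + 2];
            indices[3*t + 2] = tmp;
        }
    }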

DirectX 8 defines three transformation matrices as part of its pipeline: the World, View, and Projection transformations.

Changing these matrices causes a performance hit, so changes should be minimized where possible. However, regenerating vertex lists, or transforming them by hand, would most likely be even less efficient.

The World transformation matrix maps model space to world space. We'll assume that the vertex lists used by the rendering system are arranged for a LH system. (When/if the psim needs full model geometry information, it will keep its own copy of the geometry in a format suitable for its own purposes.)

Since vertex lists for rendering are generated in a LH model space, while position information for a body yields a transformation from a RH model space to a RH world space, we need a way to get from the LH model space of the vertex lists to the LH world space of the rendering engine:

rend model => (LH->RH, phys model->phys world, RH->LH) => rend world

We will set up DirectX 8's World transformation to perform this composite transformation for each rigid body. The View and Projection transformations will then work entirely in LH coordinates.
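
Here is a sketch of setting this up with D3DX. It assumes the physics simulator can supply its RH model-to-world matrix for a body in DirectX's row-vector layout; the physWorld parameter and the function itself are illustrative, not existing code:

    #include <d3dx8.h>

    void SetWorldTransform(IDirect3DDevice8* dev, const D3DXMATRIX& physWorld)
    {
        // F negates Z, converting between LH and RH coordinates. It is its
        // own inverse, so one matrix serves as both LH->RH and RH->LH.
        D3DXMATRIX F;
        D3DXMatrixIdentity(&F);
        F._33 = -1.0f;

        // DirectX uses row vectors (v' = v * M), so the chain
        //   rend model => LH->RH => phys model->phys world => RH->LH => rend world
        // composes left to right: world = F * physWorld * F.
        D3DXMATRIX world = F * physWorld * F;
        dev->SetTransform(D3DTS_WORLD, &world);
    }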

The View transformation has to be updated as the viewer moves about in the physical world, but it should only need to be set once per frame. The details of setting this transformation will vary depending on the camera view desired; often, the viewer's location and direction of view will initially be defined in terms of physical model coordinates, relative to a ship representing a player. In this common case, this transformation is needed:

phys model => (phys model->phys world, RH->LH) => rend world

The (phys model->phys world) transformation comes from the state of the player object, as usual. The view transformation is most easily specified in DirectX using an eye point, a look-at point (any point in the direction of view), and an up-vector. Each of these is easily specified in model space, and by applying the above transforms, we can transform the eye point and look-at point easily. The up-vector requires a little more care, however; it's not a position ('bound') vector, but only a direction ('free') vector. One way to deal with this is to define an 'up point', transform it, and then obtain the up-vector by subtracting the eye point from the up point.
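
A sketch of the up-point technique using D3DX follows. The particular model-space camera points are placeholders, and modelToRendWorld is assumed to be the combined (phys model->phys world, RH->LH) matrix described above:

    void SetViewTransform(IDirect3DDevice8* dev,
                          const D3DXMATRIX& modelToRendWorld)
    {
        // Camera definition in the player ship's model space (illustrative
        // values): at the origin, looking down +X, with +Z as 'up'.
        D3DXVECTOR3 eyePt(0.0f, 0.0f, 0.0f);
        D3DXVECTOR3 lookAtPt(1.0f, 0.0f, 0.0f);
        D3DXVECTOR3 upPt(0.0f, 0.0f, 1.0f);

        // Transform all three as positions (bound vectors).
        D3DXVECTOR3 eye, lookAt, up;
        D3DXVec3TransformCoord(&eye, &eyePt, &modelToRendWorld);
        D3DXVec3TransformCoord(&lookAt, &lookAtPt, &modelToRendWorld);
        D3DXVec3TransformCoord(&up, &upPt, &modelToRendWorld);

        // Recover the free up-vector by subtracting the eye point.
        D3DXVECTOR3 upVec = up - eye;

        D3DXMATRIX view;
        D3DXMatrixLookAtLH(&view, &eye, &lookAt, &upVec);
        dev->SetTransform(D3DTS_VIEW, &view);
    }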

The Projection transformation must be set up to produce the desired field of view, aspect ratio, and near and far clipping planes.
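
With D3DX this is a single call; the values below are placeholders, not project decisions (dev is again assumed to be the IDirect3DDevice8 pointer):

    D3DXMATRIX proj;
    D3DXMatrixPerspectiveFovLH(&proj,
                               D3DX_PI / 2.0f,  // vertical field of view
                               4.0f / 3.0f,     // aspect ratio (width/height)
                               1.0f,            // near clipping plane
                               20000.0f);       // far clipping plane
    dev->SetTransform(D3DTS_PROJECTION, &proj);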

Hierarchy of Coordinate Systems

See also Astronomical Quantities.

It is not necessary or desirable to encompass the entire virtual universe in a single physical simulation. For one thing, simulating objects very far away from a simulator's origin point would require very large coordinate numbers, or else (using floating point numbers) the simulation would become pointless because kinetic effects would fall below the round-off threshold of the coordinates, and distant objects would not be meaningfully simulated. Also, of course, the sheer number of objects simulated would become unwieldy.

Nonetheless, some universal notion of position is necessary in many contexts. So let's design a hierarchy of positional information.

At the lowest level, a physical simulator operates under performance constraints imposed by the software and hardware, and is required to represent positions with a resolution suitable to what's being simulated. Let's assume a desired resolution of one meter, which seems reasonable for close-in work such as docking, mining, and (very) tight formation flying.

Single-precision IEEE binary floating point has a 24-bit significand. (Actually, this is stored in 23 bits, with a 'hidden' bit that is always one. If you're interested in the details, see [1].) Twenty-four bits can represent integers up to 16,777,215; a sign bit is stored separately. Using meters, this corresponds to only 16,777 kilometers. The Earth's radius is 6,378 km [2]. Is this big enough for a physical simulation?

Surprisingly, the answer can be 'yes'. Simulating an object almost three Earth radii away from you (assuming you're at or near the origin of the simulation's coordinate system) with precision of a meter is probably more than good enough. A single-precision floating point representation could represent distances twice as large with 2-meter precision, four times as large with 4-meter precision, etc. There are some caveats, however.
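
The cutoff is easy to demonstrate directly (plain C++, independent of the rendering code):

    #include <cstdio>

    int main()
    {
        float pos = 16777215.0f;      // 2^24 - 1: exactly representable
        printf("%.1f\n", pos + 1.0f); // prints 16777216.0 -- still exact (2^24)
        printf("%.1f\n", pos + 2.0f); // prints 16777216.0 again: above 2^24,
                                      // representable floats are 2 m apart,
                                      // so 16777217 rounds to 16777216
        return 0;
    }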

Each physical simulation in the distributed system exists to serve an observer -- a player, or an AI agent. (On servers, there might be several observers per simulation.) The first caveat is that the observer should be kept relatively close to the origin of its own simulation, in order to preserve a large surrounding field of accuracy; one way to arrange this is sketched below.
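
The following is purely an assumption about how that caveat might be honored, not an existing design: when the observer drifts too far from the origin, shift every simulated object by the same offset so the observer returns to the high-precision region. Relative positions, and therefore the physics, are unchanged:

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Body { Vec3 pos; /* velocity, orientation, etc. */ };

    void MaybeRecenter(Body* bodies, int count, int observerIndex)
    {
        const float kThreshold = 1.0e6f;  // 1000 km; an arbitrary cutoff
        Vec3 o = bodies[observerIndex].pos;
        if (std::fabs(o.x) > kThreshold || std::fabs(o.y) > kThreshold ||
            std::fabs(o.z) > kThreshold)
        {
            // Translate every body, observer included, by the same offset.
            for (int i = 0; i < count; ++i) {
                bodies[i].pos.x -= o.x;
                bodies[i].pos.y -= o.y;
                bodies[i].pos.z -= o.z;
            }
        }
    }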

Another consideration is the magnitude of velocities in comparison to these distances. The speed of light is 299,792 km/sec, so an object travelling near the speed of light could traverse this entire high-resolution field in about 0.06 seconds! Few objects will be travelling at significant fractions of c relative to an observer. However, if high-speed collisions need to be detected, the first-pass collision-detection code may need to be carefully crafted to deal with such large velocities. If we assume that velocities are represented in meters per second, then note that the velocities themselves will suffer precision coarser than one m/s once they exceed 16,777,215 m/s, or 0.056c! (This corresponds to roughly 37.5 million mph -- a tad faster than your average Cobra.)

The precision effect on high velocities can have serious ramifications. If a ship exceeded 0.056c, had a maximum acceleration of one m/s^2 or less (about 2.2 mph/s -- very low), and we used a stepwise simulation with one-second granularity (which is quite large), then round-off errors could prevent that ship from ever:

 - accelerating any further, or
 - decelerating back below that speed.
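
The stall is easy to reproduce (plain C++ again):

    #include <cstdio>

    int main()
    {
        float v = 16777216.0f;  // 2^24 m/s, about 0.056c
        float before = v;
        v += 1.0f;              // one step: 1 m/s^2 for 1 second
        // The 1 m/s increment is below the 2 m/s spacing of representable
        // floats at this magnitude, so the velocity never changes.
        printf("changed: %s\n", v != before ? "yes" : "no"); // prints "no"
        return 0;
    }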

This situation is very contrived, and not quite as simple as described here, but the principles apply: care must be taken in the computations at high velocities. OTOH, it would take 16,777,215 seconds to accelerate to 0.056c at one m/s^2 -- which is to say, 194 days. Two ships accelerating toward each other at this rate could halve the time needed to reach this relative velocity. :-)

In short, single-precision floating point ought to suffice for physical simulation. Certain extreme situations should be considered during coding, but they may be so unlikely as to pose little practical difficulty. If that's not the case, then we may have to impose game limitations or perhaps use double-precision math.