Dan_Muller/CM Notes
SDxWiki

For "less rough" notes, see Cognitive Modelling with Mathematical Logic.

Rough notes on CM design

Clipboard area -- old cruft

Our first propagation step applies constraints that narrow the domains of the successor situation variables based on the current situation variables. For example, assume that we're modelling the pilot of a spaceship. Our situation records include variables for our distance to a hostile enemy, our velocity relative to the enemy, our acceleration ability, and the enemy's acceleration ability. Then we should be able to come up with a constraint that defines a domain for our distance to the enemy in the successor situation for the next time tick.

Now assume that our initial situation's values for these four variables are {close}, {low}, {low}, and {low#medium}, respectively. (These names are actually all encoded as integers.) In the successor state, the estimate for our distance from the enemy would start out as unknown, i.e. {sittingOnMyFlokingWindshield#outOfSensorRange}. Applying our hypothetical constraint (which I won't even try to write yet), propagation might restrict this to {veryClose#close} -- obviously an undesirable situation that will influence our decisions!
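As a rough sketch of how this might look (the scale, names, and encoding below are made up for illustration, not a committed design), a domain like {veryClose#close} can be represented as a set of integers indexing an ordered scale, and a propagation step is just set intersection:

```python
# Hypothetical sketch: a domain is a set of integers indexing an ordered
# scale of symbolic values (the notes encode the names as integers too).
DISTANCE = ["sittingOnMyFlokingWindshield", "veryClose", "close",
            "medium", "far", "outOfSensorRange"]

def interval(scale, lo, hi):
    """The domain written {lo#hi}: every value from lo to hi inclusive."""
    return set(range(scale.index(lo), scale.index(hi) + 1))

# The successor's distance estimate starts out completely unknown...
successor_dist = interval(DISTANCE, DISTANCE[0], DISTANCE[-1])

# ...and propagation intersects it with whatever the constraint allows
# given the current situation (here, the hypothetical constraint's result
# from the text is simply hard-coded).
successor_dist &= interval(DISTANCE, "veryClose", "close")
print([DISTANCE[i] for i in sorted(successor_dist)])
```

The point of the set representation is that every constraint can only remove values, so applying propagators in any order converges on the same narrowed domain.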

After propagation is complete, we come to the distribution step. We have a list of actions we can perform, along with precondition axioms and effect axioms. The precondition axioms tell us which actions are possible given the current situation. The effect axioms tell us what changes we expect if we perform an action, so that selecting an action leads to a narrowing of the domains of the successor situation's variables. So we distribute by "trying out" possible actions. Each attempt gives us a new successor situation, which we then treat as the initial situation for the next round.
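The distribution step might be sketched like this (all names and the dict-of-domains representation are made up for illustration): each action bundles a precondition axiom and an effect axiom, and each choice of action spawns a fresh successor situation with narrowed domains.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: an action bundles its precondition axiom
# (is it possible in this situation?) and its effect axiom
# (how does it narrow the successor situation's domains?).
@dataclass
class Action:
    name: str
    precondition: Callable[[dict], bool]
    effect: Callable[[dict], None]

def distribute(situation, actions):
    """Try out each possible action; each attempt yields a new successor
    situation, which becomes the initial situation of the next round."""
    for a in actions:
        if a.precondition(situation):
            # Copy the domains so alternatives don't interfere.
            successor = {var: set(dom) for var, dom in situation.items()}
            a.effect(successor)
            yield a.name, successor

# Toy illustration: an 'accelerate' action that pins our velocity domain.
sit = {"dist": {1, 2}, "vel": {0, 1, 2}}
accelerate = Action("accelerate",
                    precondition=lambda s: 2 in s["vel"],
                    effect=lambda s: s["vel"].intersection_update({2}))
choices = dict(distribute(sit, [accelerate]))
```

Copying the situation per alternative mirrors what computation spaces give you for free: each branch explores its own hypothesis without polluting the others.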

This is where things get really interesting. Let's discuss some of the issues with the distribution strategy.

First, we need to limit the depth of the search tree. Always picking the "best" possible sequence of actions isn't feasible: naive attempts to investigate every possible action in random order will be way too slow. Instead, we need to be clever about this. We also can't let the tree grow without bound.
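One way to keep the tree manageable (a sketch under assumed representations, not a committed design) is a depth-limited search that expands actions in order of a heuristic estimate instead of at random. Here an action is just a tuple of a name, a precondition test, a successor function, and a heuristic:

```python
# Hypothetical sketch: depth-limited search, ordering candidate actions
# by a heuristic estimate rather than trying them in random order.
def search(state, actions, value, depth):
    """Return (best value reachable within `depth` steps, action plan)."""
    best_val, best_plan = value(state), []
    if depth == 0:                       # limit the depth of the tree
        return best_val, best_plan
    # Be clever, not random: most promising candidates first.
    candidates = sorted((a for a in actions if a[1](state)),
                        key=lambda a: a[3](state), reverse=True)
    for name, _ok, step, _est in candidates:
        val, plan = search(step(state), actions, value, depth - 1)
        if val > best_val:
            best_val, best_plan = val, [name] + plan
    return best_val, best_plan

# Toy: maximize a counter; "inc" adds 1, its precondition caps it at 3.
inc = ("inc", lambda s: s < 3, lambda s: s + 1, lambda s: 1)
print(search(0, [inc], lambda s: s, depth=5))
```

The depth bound and the ordering heuristic are exactly the two knobs the paragraph above calls for; a real distribution strategy would also prune branches whose propagated domains already look hopeless.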

An initial propagation step lets you deduce certain things about the situation. Ideally, nothing should fail at this stage; otherwise, your knowledge of the world is completely at odds with your domain knowledge in some way!

Based on the situation, you can determine what actions are open to you. So you distribute on these various actions. As always, the distribution step is critical, and I'll talk about this some more in a bit.

In each alternative computation space, you're modelling the effects of taking one or more actions.

Some constraints model what we know about how the world changes over time. They infer something about the possibilities of the next situation, narrowing the domain of variables in S0. (Remember that the domains of variables in S0 start out as wide as possible.)
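A temporal constraint of this kind might be sketched as follows (the one-step-per-tick rule and the integer encoding are invented for illustration): the successor's domain starts out as wide as possible and is intersected with the set of values reachable from the current domain in one tick.

```python
# Hypothetical temporal constraint: at low relative velocity, distance
# can move at most one step along the ordered scale per time tick.
def reachable(current_dom, max_step):
    """Union of values reachable in one tick from any current value."""
    out = set()
    for d in current_dom:
        out.update(range(d - max_step, d + max_step + 1))
    return out

FULL = set(range(6))        # successor domain starts as wide as possible
current = {2}               # {close}, encoded as an integer
successor = FULL & reachable(current, max_step=1)
print(sorted(successor))
```

Intersecting with FULL keeps the result inside the scale; other constraints on the same variable would intersect further, which is how the initially wide domains end up narrow.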