Cognitive_Modelling_With_Mathematical_Logic/Discussion

JDH 8/31/2001

I just read (and re-read) this and it sounds great. Here's my take on where this fits in. (I suspect this is somewhat the same as what you're thinking, but (a) I want to confirm that, and (b) who knows, I might even have a valid idea or two!)

In our space-trading style game, I imagined that this type of AI would be responsible for controlling an NPC's high-level goals. For a trader, these would be what to trade in and where to trade it; for a pirate, what sector of space to hang out in to mug traders; for a mercenary, where to go to have the best fight and earn the most for it. All these high-level goals would be modified by the character traits of individual NPCs: is a trader highly moral, simply law-abiding, or a drug runner? Is a mercenary totally gung-ho (looking for the biggest fight) or more cunning (looking for the best fight they think they can win)? High-level goals, along with a range of believable character traits, are essential for making an immersive experience.
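To make the trait idea concrete, here's a minimal sketch of trait-modulated goal scoring (the Traits fields, the scoring rule, and the numbers are all invented for illustration -- nothing here is from an existing design):

  # Minimal sketch: a trader's willingness to take a trade depends on traits.
  from dataclasses import dataclass

  @dataclass
  class Traits:
      morality: float    # 0.0 = drug runner, 1.0 = highly moral
      aggression: float  # would feed combat-related goals; unused here

  def score_trade_goal(traits, commodity_legal, profit):
      """Score a candidate trade; moral traders discount illegal cargo."""
      penalty = 0.0 if commodity_legal else traits.morality * profit
      return profit - penalty

  # The law-abiding trader values the smuggling run at nothing;
  # the drug runner sees pure profit.
  saint = Traits(morality=1.0, aggression=0.2)
  smuggler = Traits(morality=0.0, aggression=0.2)
  print(score_trade_goal(saint, commodity_legal=False, profit=100))     # 0.0
  print(score_trade_goal(smuggler, commodity_legal=False, profit=100))  # 100.0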

In the discussion you ponder (among many other things) questions related to where to insert the output of the model and when to update its inputs to deal with the ever-changing universe in which the NPC is playing. I think the CM for an NPC would be initiated and updated while they are in a space station, planning how best to serve their high-level goals. They could create a high-level plan based on the information they know at that point -- for example, planning the best trade route as a set of hops between planets. They could choose to update the plan at any point along the way if they've acquired new information first-hand, or if other information sources have been updated (I'm trying to differentiate between first-hand experience and second-hand news). So the trader might have barely survived a mugging on his last hop, even though the police report he had used as the basis for coming this way said everything was fine -- after such an incident he might decide to treat police reports with more scepticism. Having the NPC run its CM while at a space station might also have performance advantages: the state of the NPC's world is more static while he's docked, so he can take longer to ponder his next moves. This also seems to map well onto how real people might play the game.
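The growing scepticism could be as simple as a per-source trust weight, nudged whenever first-hand experience confirms or contradicts second-hand news. A rough sketch, assuming a simple moving-average update (the names and the rate are made up):

  def update_trust(trust, report_was_accurate, rate=0.25):
      """Nudge trust toward 1.0 after an accurate report, toward 0.0 after a bad one."""
      target = 1.0 if report_was_accurate else 0.0
      return trust + rate * (target - trust)

  trust_in_police_reports = 0.9
  # The police report said the route was safe, but the trader got mugged:
  trust_in_police_reports = update_trust(trust_in_police_reports, False)
  print(trust_in_police_reports)  # 0.675 -- that source now weighs less in planning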

On the John David Funge page above, your examples are at a much lower level than this (aiming guns and so forth), and below you talk about controlling the use of an engine depending on whether it was too hot. I had imagined we would use other AI techniques for these relatively straightforward concepts (you're either good at aiming the gun or you're not -- there aren't many character traits involved). I realize that some of the CM might be useful for deciding whether the NPC should aim (and fire) the gun at all, but I think those sorts of decisions should be pre-programmed into the more traditional models before a stage is undertaken -- in other words, part of the output of the CM would be to set the parameters for how to respond to certain conditions in the next stage. Our trader might decide to proceed very cautiously because the planet he's flying to is a known hangout for a certain pirate. In my autoElite work I model a threat-detection and a threat-response mechanism; these two systems can be programmed to create paranoid chicken-shits (everything's a threat and they run away a lot), aggressive gung-ho maniacs (everything's a target and they don't care about the risks), and a range of stuff in between.
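For what it's worth, the two dials boil down to something like this toy version (the parameter names and thresholds are invented for the sketch, not taken from the actual autoElite code):

  def classify(threat_level, sensitivity):
      """A contact is a threat when its level clears the NPC's threshold."""
      return threat_level > (1.0 - sensitivity)

  def respond(is_threat, aggression):
      if not is_threat:
          return "ignore"
      return "attack" if aggression > 0.5 else "flee"

  def react(threat_level, sensitivity, aggression):
      return respond(classify(threat_level, sensitivity), aggression)

  # Paranoid chicken-shit: everything is a threat, and the answer is to run.
  print(react(0.2, sensitivity=0.9, aggression=0.1))  # flee
  # Gung-ho maniac: the same contact reads as a target, risks be damned.
  print(react(0.2, sensitivity=0.9, aggression=0.9))  # attack
  # A calmer type doesn't even register it.
  print(react(0.2, sensitivity=0.5, aggression=0.5))  # ignore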

In summary: I see cognitive modelling as a way to express the complexities of these high-level, abstract concepts (why do I trade?). I see a host of other AI techniques controlling complex actions and reactions at the much more concrete low level (I fly a ship).


DWM 9/1/2001

Thanks for the comments, and the time it must've taken you to wade through that page!

Actually, I am not trying to predetermine at what level the cognitive model should operate. Certainly, the "primitive actions" that the game engine supplies will themselves be complex. For instance, in the example of aiming the gun (which was taken directly from Funge's book), the cognitive model simply decides that the gun should be aimed. Funge's examples are mostly related to applications like a real-time first-person shooter. The actual act of raising the arm, locating the target, holding the gun steady, and pulling the trigger, all of which need to be modelled in that sort of game, are handled by the game engine.

Most of my examples in the design doc are at a similar sort of level, because I find that easier to think about at this stage. On reflection, perhaps this is because long-term memory and learning aren't as important at this level. I certainly do intend to try applying these techniques to higher-level reasoning, and I think you're right that at higher levels, less frequent evaluation of alternatives is needed. In a sense, there's a hierarchy of goals. Higher-level "thinkers" select high-level actions, which are actually goals for lower-level "thinkers". At some point, the longevity and complexity of an action/goal reduce to the point where you can afford to hard-code the procedure for enactment into the game engine itself.
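A bare-bones sketch of that hierarchy, with two levels and invented names (each "thinker" turns its goal into an action, which becomes the goal of the thinker below it, until the action is primitive enough to hand to the game engine):

  class Thinker:
      def __init__(self, name, choose, subordinate=None):
          self.name = name
          self.choose = choose          # maps a goal to an action
          self.subordinate = subordinate

      def pursue(self, goal):
          action = self.choose(goal)
          print(f"{self.name}: goal={goal!r} -> action={action!r}")
          if self.subordinate:          # my action is my subordinate's goal
              self.subordinate.pursue(action)
          # else: the action is primitive -- hand it to the game engine

  pilot = Thinker("pilot", lambda goal: f"fly to {goal.split()[-1]}")
  trader = Thinker("trader", lambda goal: "visit Lave", subordinate=pilot)
  trader.pursue("maximise profit")
  # trader: goal='maximise profit' -> action='visit Lave'
  # pilot: goal='visit Lave' -> action='fly to Lave'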

I hadn't thought much about modelling long-term memory yet, but I will start to. I'm postponing most thoughts about learning until I have a better handle on how the knowledge representations will come out. The chances of coming up with a viable learning model before you have a model that you can teach by hand are slim, I think. :-)

There is definitely some overlap between what you describe in your autoElite project and what I'm playing with here. I think Funge would assert that using mathematical logic to express the decision-making process will lead to more maintainable, malleable code, and I tend to agree. I've always thought that declarative forms of programming are less complex in many situations than imperative forms, and this project will give me a chance to test that out a bit. (Although I'm under no illusions that it will be a panacea -- I've written very large spreadsheets for home financial planning, and they were a bitch to maintain and debug. Hopefully that was a fault of the tools rather than the paradigm.)
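Here's the contrast in miniature, as I think of it: the declarative version states each rule once as a (condition, action) pair and leaves the evaluation to a generic chooser, instead of burying the rules in nested control flow. Purely illustrative:

  RULES = [
      (lambda s: s["hull"] < 0.3,    "flee"),
      (lambda s: s["pirate_nearby"], "raise shields"),
      (lambda s: s["cargo_full"],    "head to market"),
      (lambda s: True,               "patrol"),  # default
  ]

  def decide(state):
      # First rule whose condition holds wins; order encodes priority.
      return next(action for cond, action in RULES if cond(state))

  print(decide({"hull": 0.8, "pirate_nearby": True, "cargo_full": False}))
  # raise shields -- rules can be added or reordered without touching decide()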

Thinking about the higher-level, in-station planning made me realize that the character will also have access to a potentially vast database of things like galactic economic, navigational, and political data. I can't model all that stuff as fluents in this design, I don't think. I need a new category of data for "things that are part of the situation but that none of your actions could possibly change", so that I can come up with a different method of accessing it. Time won't be as critical for this level of planning, so some sort of interface to look up information as needed might be workable. OTOH, these could be treated as "database query actions", i.e. just another form of "sensing action" that takes (relatively) little time and adds information to the situation. At this point, talking about communicating with the "game engine" is an oversimplification, since there will likely be multiple logical information sources to interact with. But we already knew that the game would, in a sense, have multiple "game engine" components to model different aspects and levels of the virtual world.
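The "database query action" idea might reduce to something like this sketch: the galactic database is immutable reference data (never a fluent), and the query is a side-effect-free sensing action that only extends the knowledge component of the situation. All names are placeholders:

  GALACTIC_DB = {("Lave", "fuel_price"): 14.2}  # static: no action changes it

  def query(situation, subject, attribute):
      """Sensing action: cheap, side-effect free, extends what is known."""
      known = dict(situation["knowledge"])
      known[(subject, attribute)] = GALACTIC_DB[(subject, attribute)]
      return {**situation, "knowledge": known}  # new situation, same world

  s0 = {"location": "Tionisla", "knowledge": {}}
  s1 = query(s0, "Lave", "fuel_price")
  print(s1["knowledge"])  # {('Lave', 'fuel_price'): 14.2}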