The decision tree is a discrete process; how do you handle
continuous processes? (updated 16-05-03)
First, let us define ‘discrete process’ and ‘continuous process’ so that there is no ambiguity.
By discrete process, from the input point of view, we mean that inputs are interpreted in a discrete way: to understand what they mean, conditions are checked against fixed values. A condition is either satisfied or it is not; no intermediate value is accepted. From the output point of view, the output values are discrete and the set of outputs is limited. Naturally, a ‘continuous process’ is the opposite idea: parameters may take intermediate values, and the set of outputs can be infinite.
Let us come back to the prototype: a decision tree is a discrete process. The reasons why this process was chosen for the AI are discussed in the corresponding paragraph. Thanks to the decision trees, the AI inputs that are ‘events’ are interpreted in a discrete way: the process uses discrete values. Since the branches are distinct and the conditions are mutually exclusive, running the tree recognizes either one situation or another; you cannot be between the two. Of course, several situations are reachable, but they are clearly distinct. Admittedly, a human pilot is able to make a compromise between two points of view: between a perilous situation and a safe one, there is a medium one. The easiest way to simulate such an ability is to multiply the situations that can be recognized. In short, to deal with an infinite set of situations, as many sub-situations as possible have been implemented. For instance, to tell whether the attacker has the positional advantage, three standard geometries can be recognized: good position, medium position, and bad position. This is currently the simplest way to tackle the problem.
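The discretization described above can be sketched as follows. This is a minimal illustration, not the prototype's actual code: the three-way classification matches the text, but the parameter name (`angle_off_deg`) and the threshold values are assumptions chosen for the example.

```python
from enum import Enum

class Position(Enum):
    GOOD = "good position"
    MEDIUM = "medium position"
    BAD = "bad position"

def classify_position(angle_off_deg: float) -> Position:
    """Discretize a continuous geometry parameter into one of three
    mutually exclusive situations, as the decision tree does.
    Thresholds are illustrative; the prototype's values are not given."""
    if angle_off_deg < 30.0:
        return Position.GOOD
    elif angle_off_deg < 90.0:
        return Position.MEDIUM
    else:
        return Position.BAD
```

Note that every continuous input value falls into exactly one branch: the conditions are exclusive, and no intermediate situation exists between two branches.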
It is almost the same issue for how the AI outputs are generated. Running the tree puts you either in one branch that leads to a decision or in another branch that leads to a different decision, so there are several possible decisions. Although in this prototype the decisions (‘maneuvers’) have been made closer to ‘basic actions’, they should still remain high-level decisions. Maneuvers are tendencies, policies that the pilot has to follow. The layer below the AI is in charge of providing a set of continuous outputs (‘basic actions’) from a single discrete decision.
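The lower layer described above could be sketched like this. The maneuver names, the state fields, and the control formulas are all hypothetical; only the principle (one discrete decision expanded into continuous outputs) comes from the text.

```python
def basic_actions(maneuver: str, state: dict) -> dict:
    """Lower layer: expand a single discrete decision ('maneuver') into
    continuous control outputs ('basic actions').
    Names and formulas are illustrative, not the prototype's."""
    if maneuver == "pursue":
        # Steer continuously toward the target: the output varies
        # smoothly with the state, even though the decision was discrete.
        return {"roll": 0.5 * state["bearing_error"], "throttle": 1.0}
    elif maneuver == "evade":
        # Break away: roll hard opposite to the threat, full power.
        return {"roll": -1.0 if state["bearing_error"] >= 0 else 1.0,
                "throttle": 1.0}
    else:
        # Default policy: fly straight and level at cruise power.
        return {"roll": 0.0, "throttle": 0.7}
```

The key point is that the decision tree only selects the policy; the continuous values themselves are computed below it.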
Of course, other solutions could be adopted. For instance, fuzzy functions could be an interesting alternative. We would have liked to have had enough time to test this solution.
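To give an idea of that alternative: with fuzzy membership functions, an input can belong partially to several situations at once, instead of falling into exactly one branch. The ramp shapes and breakpoints below are assumptions made for illustration only.

```python
def mu_good(angle_deg: float) -> float:
    """Membership in 'good position': full at 0°, fading out by 60°.
    (Illustrative ramp, not from the prototype.)"""
    return max(0.0, min(1.0, (60.0 - angle_deg) / 60.0))

def mu_bad(angle_deg: float) -> float:
    """Membership in 'bad position': starts at 30°, full at 120°."""
    return max(0.0, min(1.0, (angle_deg - 30.0) / 90.0))

# At 45°, both memberships are non-zero: the intermediate situation
# is represented directly, with no need to multiply discrete branches.
print(mu_good(45.0), mu_bad(45.0))
```

This is exactly the ‘medium’ compromise that the decision tree can only approximate by adding more sub-situations.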
NB: it will be discussed further that trying to make ‘maneuvers’ close to ‘basic actions’ is the root of many mistakes in the prototype.