Pilot’s Associate Program: Technical Summary
Technically, by 1991, the PA was the most advanced working real-time intelligent system of its day and remains unsurpassed in the world. We successfully integrated six expert systems operating in real time in a realistic (some would say too realistic) combat simulator. The knowledge implemented in each component of this system was drawn from realistic combat experience and is demonstrably applicable to the operation of combat aircraft today.
- System Goals
- System Behavior
- System Integration
- Designing for Real-Time Operation
- Plan-Goal Graphs
The overall objective was to recommend in real time (with a response time around 0.1 sec) to the pilot the most appropriate response to the current situation. This recommendation might be offensive or defensive or, in the best-case scenario, a modification to the current offensive plan that would provide sufficient survivability in the face of the current threat environment. Since the pilot is always in charge of a fighter aircraft, the PA must function as an associate – an R2D2 back-seater – advising rather than dictating. Furthermore, the recommendations needed to be responsive to:
- changes in the current offensive or defensive situation;
- changes in the ability of the aircraft to complete the current plan due to failures, battle damage or the status of its consumable resources – fuel, weapons or ammunition; and
- changes to the pilot’s objectives.
Many architecture and implementation choices were driven by the need to have the system react correctly to change. Traditional script-based planning systems have been able to instantiate and parameterize a course of action described as a series of steps. However, they are usually unable to postpone, modify or abandon the remains of a script if circumstances indicate that something else should be done.
The recommendations of the Planners needed to be presented to the pilot in the most readily understood style, and in a manner that was sensitive to the pilot’s workload. The requirements for the pilot to be in command and for the system to operate in real time led to the need to tailor the response of the PA to the expectations of individual pilots. This would be accomplished long-term by setting bounds on the authority and behavior of the PA as an individual trains with the system in the simulator. Short-term, the pilot would be able to review and correct these settings at the mission planning station while the system loaded a data cartridge with the basic mission information before a flight.
This figure illustrates the integrated behavior of the key system components: the Planners, the Intent Inference portion of the PVI, and the Plan-Goal Graph that unifies their interactions. For the associate to be truly responsive to the pilot’s needs, the following general modes of behavior must be achieved:
- The Situation Assessment and Systems Status subsystems reduce the large amount of external data to a much smaller number of significant events that are passed to the Planners.
- The Planning subsystems analyze the current situation and determine a course of action consistent with mission goals and mission safety.
- Proposals related to this course of action are prepared for the User Interface section of the PVI.
- When the urgency of this activity warrants presentation to the pilot, the User Interface describes the proposal using the presentation method that is most economical of the pilot’s cognitive load and of display real estate.
- If the proposed action is pre-approved by the pilot for the system to implement, the pilot may merely be informed.
- The Intent Inference System continues to capture pilot actions to determine whether the pilot intends to adopt any proposals not automatically implemented. These indications may take the form of switch or voice actions specifically addressing the proposal, or they may be the actions associated with implementing the proposal.
- If these indications suggest that the pilot intends to implement the plans, the Planning subsystems continue to refine the plans to ensure they are still the most important things to be done, and that the aircraft is physically able to accomplish them.
- If the indications are that the pilot wishes to do something else, the Planning subsystems temporarily disable the original proposal and work to generate further plans to support the inferred intent of the pilot.
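The propose / approve / infer-intent loop described above can be sketched as a small state machine. This is a minimal illustration, not the PA implementation; the class and state names are assumptions introduced for the example.

```python
from enum import Enum, auto

class ProposalState(Enum):
    PROPOSED = auto()       # sent to the User Interface for the pilot
    AUTO_EXECUTED = auto()  # pre-approved by the pilot; merely inform
    ADOPTED = auto()        # pilot actions indicate acceptance
    SHELVED = auto()        # pilot intends to do something else

class Proposal:
    """Hypothetical sketch of a planner proposal tracked by intent inference."""
    def __init__(self, plan_name, pre_approved=False):
        self.plan = plan_name
        self.state = (ProposalState.AUTO_EXECUTED if pre_approved
                      else ProposalState.PROPOSED)

    def observe_pilot_action(self, action, supports_plan):
        """Intent inference: a switch/voice action addressing the proposal,
        or actions implementing it, adopt the plan; contrary actions
        temporarily shelve it so new plans can support the inferred intent."""
        if self.state is not ProposalState.PROPOSED:
            return
        self.state = (ProposalState.ADOPTED if supports_plan
                      else ProposalState.SHELVED)

evade = Proposal("Evade_SAM_Site")
evade.observe_pilot_action("turn_to_heading_270", supports_plan=True)
print(evade.state.name)  # ADOPTED
```

In the real system the "supports_plan" judgment was itself the hard part, inferred from streams of pilot actions rather than supplied as a flag.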
This intense communication between the various subsystems requires a common vocabulary expressed as a Plan-Goal Graph (PGG). This became the key ingredient in the system integration strategy.
A key ingredient to the success of the PA was the early discovery that system integration was not a problem, it was the problem. Consequently, we decided on a development approach that featured many short development and integration cycles. Integration of a system of this complexity occurred at two levels: syntactic and semantic. Syntactic integration involved the normal activities associated with defining message layout and content, and agreeing on the relationships between subsystems using these message definitions. Semantic integration involved agreeing on the meaning of the data items in the messages. In order to support the semantic integration of the subsystems, two shared models were required:
- The World Model is a conventional representation of objects in the real world; it stores the current state of the pilot’s own aircraft and of all the friendly and hostile objects of interest.
- The Plan-Goal Graph (PGG) stores a representation of the objectives of the mission in a specific algebraic relationship.
While the emphasis in Phase 1 was on functionality without concern for real-time operation, it was recognized that unless the system operated close to real-time, it would be impossible to capture meaningful pilot useability comments about the system knowledge. Consequently, a number of system architecture decisions were made to position the system for full real-time operation:
The primary threat to performance was seen to be massive flows of data through computationally intensive core processes such as planning and the intent inferencing part of the PVI. The PA system was therefore designed to filter raw data from the outside and report only interesting events to the core systems. These reports were in the form of events. As the core processes defined and refined their model of the world (see PGG below), they posted monitors to the Situation Assessment (SA) and Systems Status (SS) modules requesting an event notification when a certain situation was detected. For example,
- any plan involving the launch of an AIM-120 missile would set monitors to report a failure in the missile subsystems, or the remaining missile count falling below a specified number;
- any mission route would post a monitor on fuel capacity;
- any threat evasion plan would post monitors for information suggesting that this aircraft had been detected by the threat we were trying to avoid.
The role of SA and SS was therefore to receive the masses of raw data from external and internal sources, build models of the situation inside and outside the aircraft, and run the data through a semantic network. The nodes of this network became hosts for the dynamic lists of monitors, each defined and parameterized by the subsystem requesting it. When a monitor test succeeded, a specific event was reported back to that requesting subsystem.
This way, only important data was seen by the core processes, and they were able to remain close to real-time operation.
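The monitor/event mechanism just described can be sketched as follows. This is an illustrative reconstruction under assumed names (`SituationAssessment`, `post_monitor`, `ingest`), not the PA code: planners register predicates, and only satisfied monitors generate events for the core processes.

```python
class SituationAssessment:
    """Filters raw data; core planners see only registered monitor events."""
    def __init__(self):
        self.monitors = []  # (event_name, predicate, callback) triples

    def post_monitor(self, name, predicate, callback):
        """A planner posts a parameterized monitor requesting notification."""
        self.monitors.append((name, predicate, callback))

    def ingest(self, raw):
        """Raw data is run past every monitor; only successes become events."""
        for name, predicate, callback in self.monitors:
            if predicate(raw):
                callback(name, raw)

events = []
sa = SituationAssessment()
# A plan involving an AIM-120 launch monitors the remaining missile count.
sa.post_monitor("missiles_low",
                lambda d: d.get("aim120_count", 99) < 2,
                lambda name, d: events.append(name))
sa.ingest({"aim120_count": 4})  # above threshold: no event reaches planners
sa.ingest({"aim120_count": 1})  # threshold crossed: event fired
print(events)  # ['missiles_low']
```

The design point is that the expensive core processes pay per *event*, not per raw data item, which is what kept them near real-time operation.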
On average, the code at Demo 2 and Demo 3 ran about 6 times slower than real time. The system was connected to a flight simulator, and the pilot could fly for some short period of time. When the PA subsystems needed time to compute, the pilot was notified in the cockpit and the flight simulation froze.
The monitor / event philosophy was successfully carried into the Phase 2 architecture, and a further set of strategies implemented to achieve real-time performance:
- the systems were re-coded in C++ and hosted on a collection of processors in a VME chassis communicating via shared memory. [This was a major technical issue due to the character of the C++ object code and the need to share dynamic objects between processors.]
- to achieve rapid response to the need for a mission route, a board was designed and built to implement the dynamic programming search for optimal paths through the threat environment. This board replaced the Sun Sparc implementation of Phase 1 and reduced the computation time from a small number of seconds to tens of milliseconds.
- to minimize the latency of delivering events to the right processor, a second board, the Global Event Facility (GEF), and its associated software were designed and built. Processes expecting an event would register for that event with the GEF. Event detectors would then report that event to the GEF, which delivered it to the requesting processor via hardware interrupts with minimal software overhead.
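The dynamic-programming route search mentioned above can be illustrated with a toy version: sweep a threat-cost grid column by column, keeping the cheapest cumulative exposure for each row. This sketch is an assumption about the general technique, not the algorithm on the actual board, which solved a far larger problem in tens of milliseconds.

```python
def min_threat_route(cost):
    """Minimum-exposure path cost across a threat grid, left to right,
    moving to the same or an adjacent row at each column step."""
    rows, cols = len(cost), len(cost[0])
    best = [row[0] for row in cost]  # cumulative cost at column 0
    for c in range(1, cols):
        new = []
        for r in range(rows):
            # enter cell (r, c) from any adjacent row of the previous column
            prev = min(best[max(r - 1, 0):min(r + 2, rows)])
            new.append(prev + cost[r][c])
        best = new
    return min(best)

# Higher numbers model cells with greater threat exposure.
grid = [[1, 9, 1],
        [1, 1, 9],
        [9, 1, 1]]
print(min_threat_route(grid))  # 3
```

Dynamic programming suits this problem because each column’s best costs depend only on the previous column, which is also what makes the search amenable to a hardware pipeline.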
The Plan-Goal Graph (PGG), together with the associated dictionary providing supporting information, played a dominant role in the acquisition of operational knowledge, and in designing into the system the ability to react to change.
Interface control is normally accomplished by publishing and agreeing to the number and type of the parameters in each message exchanged between subsystems. Since plan names can be encoded efficiently as enumerated data types, it was originally believed that transmitting the name of a plan would be sufficient information to exchange. As the early prototypes of the PA were integrated, it became clear that this conventional approach to interface control was not going to work. Humans reading the name of a plan like “Fly_Planned_Route” see not just the words, but infer an enormous amount of additional information from that name:
- this is part of the objective to reach the combat area of operations;
- it can be satisfied incrementally by reaching each waypoint in turn;
- this can be achieved either on autopilot or by flying manually.
While this is obvious to human readers, none of this and a myriad of other implied information is explicit in just the name. Left to themselves, the developers of the early prototypes placed their own interpretation on the meaning of these plan names and produced subsystems that did not communicate properly. A small group was assembled to resolve this problem, and the result was a detailed definition of the semantics of every possible plan and goal in the system.
The following figure illustrates the general form of a PGG.
A PGG is an acyclic graph (not a tree, since it is legal for a node to have multiple parent nodes). The uppermost layers are the overall goals of the system that typically remain in place throughout the mission. Most systems that place stress on an operator share the characteristic of having multiple top-level goals, as shown above. Balancing these goals then becomes the responsibility of the pilot supported by the associate. A very specific algebra describes a well-formed PGG:
- Goals are states of the world the system should be attempting to achieve, illustrated as circles above.
- Each goal can be achieved by any one of a number of child plans (a logical OR relationship). For example, the “Approach Target” goal above can be satisfied by any one of a number of trajectories.
- Plans are courses of action to accomplish specific goals, illustrated as squares above.
- Each plan can only succeed if all of its child goals are satisfied (a logical AND relationship). For example, the “Formation Engage Group” plan can only succeed when the target has been approached, the weapon selected, the target tracked accurately, and the weapon launched.
- Upper level plans in the graph tend to be integrators attempting to satisfy their child goals.
- Leaf level plans (those with no child goals) tend to be actions the pilot can readily accomplish like “turn to heading” or “press this switch.”
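The OR/AND algebra above can be captured in a few lines. This is a minimal sketch of the evaluation rule only (class and method names are assumptions): a goal is satisfied if ANY child plan succeeds, and a plan succeeds only if ALL of its child goals are satisfied.

```python
class Goal:
    """A state of the world to achieve; satisfied if ANY child plan succeeds (OR)."""
    def __init__(self, name, plans=()):
        self.name, self.plans = name, list(plans)

    def satisfied(self):
        return any(p.succeeds() for p in self.plans)

class Plan:
    """A course of action; succeeds only if ALL child goals hold (AND).
    A leaf plan (no child goals) models a directly executable action."""
    def __init__(self, name, goals=(), done=False):
        self.name, self.goals, self.done = name, list(goals), done

    def succeeds(self):
        if not self.goals:
            return self.done  # leaf action: succeeded once performed
        return all(g.satisfied() for g in self.goals)

# Using the document's example names:
approach = Goal("Approach_Target",
                [Plan("Trajectory_A", done=True), Plan("Trajectory_B")])
select = Goal("Select_Weapon", [Plan("Select_AIM120", done=True)])
engage = Plan("Formation_Engage_Group", [approach, select])
print(engage.succeeds())  # True
```

The acyclic-graph property matters here: a goal may appear under several plans, so a real implementation would share node instances rather than duplicate subtrees.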
It should be noted that the PGG may be likened to a class diagram in that it identifies the character of all of the possible nodes. There were around 500 different node types in the last implementation of the system. Dynamically created instances of these node types were much more numerous. Some top-level nodes were created once at initialization. However, the majority of nodes had an instance created for every interesting object in the simulation environment. It was not uncommon for the number of node instances being processed to exceed 5,000.
In addition to the logical relationship between individual plans and goals, a significant volume of information accompanies every node in a PGG. This information may be summarized as follows:
- Parameters: required to completely specify an instance. Each parameter was flagged as either fixed (instance defining) or updateable as new events are processed.
- Ordinality: indicating whether this node is static – a permanent feature of the PGG space, or dynamic – instantiated when circumstances demanded. Also indicating whether only one instance exists, or whether multiple instances are possible.
- Information Requirements: if an instance of this node is proposed or active, what must be shown to the pilot for him to understand the proposal or how to monitor its progress.
- Workload Impact: if this node is active, what amount of the pilot’s cognitive workload is it attracting?
- Monitors Required: a standard collection of monitors must be posted with the Situation Assessment and/or System Status modules to determine when execution should start, when it is finished, when the plan has failed, …
- Connectivity: what parent and child nodes does this node relate to, and the list of other nodes with which this node is mutually exclusive.
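A node-type record carrying the information listed above might look like the following. The field names are assumptions introduced for illustration; the source describes the categories of information, not their concrete representation.

```python
from dataclasses import dataclass, field

@dataclass
class NodeType:
    """Illustrative PGG node-type descriptor (field names are assumed)."""
    name: str
    kind: str                                      # "plan" or "goal"
    fixed_params: list = field(default_factory=list)       # instance defining
    updateable_params: list = field(default_factory=list)  # updated by events
    static: bool = True                            # vs. dynamically instantiated
    multiple: bool = False                         # may multiple instances exist?
    info_requirements: list = field(default_factory=list)  # shown to the pilot
    workload_impact: float = 0.0                   # share of cognitive workload
    monitors: list = field(default_factory=list)   # posted with SA and/or SS
    parents: list = field(default_factory=list)
    children: list = field(default_factory=list)
    mutually_exclusive: list = field(default_factory=list)

fly = NodeType("Fly_Planned_Route", "plan",
               fixed_params=["route_id"],
               updateable_params=["next_waypoint"],
               static=False, multiple=True,
               monitors=["fuel_quantity", "off_route_deviation"])
print(fly.multiple)  # True
```

Keeping this information per node *type* is what let the knowledge-acquisition effort proceed node by node, as the next paragraph describes.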
Having the overall structure of the PGG as a guide, and knowing the information required for each node facilitated the kind of structured acquisition of operational knowledge that is vital to achieving operational utility. This structure also enabled us to plan the acquisition of knowledge in a specific area of the operational domain in order to focus a specific prototype integration on a specific operational scenario.
Testing the System
Testing the integrated PA systems as they evolved was the area that required an extremely creative set of strategies. In order to test a system designed for a complex combat environment, one has to design and then simulate, to a sufficient level of fidelity, a broad collection of combat scenarios. Two different sets of scenarios were necessary – one public set for use during development, and another hidden set to be seen by the developers only when the formal system testing began. This hidden set guards against developers producing “point solutions” that happen to work in one particular scenario, but are not supported by generalized knowledge and implementation.
Selecting suitably realistic, complex scenarios required careful planning. When you consider all of the possible scenarios, some present situations to the test aircraft that are so undemanding that an un-aided pilot can fly the mission comfortably. On the other hand, in some scenarios, the threats are so dense that the mission cannot be achieved even with perfect knowledge of the best plans. In between these extremes lies the set of scenarios that are difficult to fly without assistance, but not so difficult that an associate couldn’t improve pilot performance. But how were we to find these scenarios? The answer lay in the judicious use of an Air Force approved battle analysis tool called Tac Brawler. This program could take a collection of aircraft tracks and determine a number of statistics such as detection and shot opportunities and the number of aircraft surviving. Thousands of randomly generated tracks were run through Tac Brawler and their scores tabulated. By simulating and flying tracks with certain scores, we could determine the thresholds for the too easy and too difficult scenarios. Any scenarios outside those thresholds were discarded, and the rest saved for test purposes.
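The threshold-based scenario selection can be sketched in a few lines. The `score` here is a stand-in assumption for the survivability statistics Tac Brawler actually produced; the cutoff values are invented for illustration.

```python
def select_test_scenarios(scored_tracks, easy_cutoff, hard_cutoff):
    """Keep scenarios hard enough to benefit from an associate but not
    hopeless; cutoffs were calibrated by flying tracks at sample scores."""
    return [track for track, score in scored_tracks
            if hard_cutoff <= score <= easy_cutoff]

scored = [("track_a", 0.95),  # too easy: un-aided pilot flies it comfortably
          ("track_b", 0.55),  # in the useful band: associate can help
          ("track_c", 0.10)]  # too hard even with perfect plans
print(select_test_scenarios(scored, easy_cutoff=0.8, hard_cutoff=0.3))
# ['track_b']
```

Discarding both tails of the score distribution is what concentrated test effort on scenarios where associate performance could actually be measured.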
A new simulation system was constructed using a VME chassis populated with processor cards similar to that hosting the PA systems. A fighter cockpit representative of a typical modern fighter was constructed with a state-of-the-art out-of-the-window visual system. Special arrangements were made for an audience of around 100 people to be able to observe the performance of the system using large repeater screens for the cockpit displays.
last updated 10/5/2002 by David Smith