
The Super Cockpit and Human Factors Challenges

Thomas A. Furness, III


ABSTRACT

A revolutionary virtual crew station concept called the "Super Cockpit" is introduced, along with its applications and operational advantages. Unique aspects of the virtual information portrayal and interactive control medium of the Super Cockpit are discussed, leading to the need for new areas of human factors research and engineering.

INTRODUCTION

During the past year the Air Force Systems Command has been conducting an intensive study to identify future technologies and systems to meet Air Force operational needs over the next 20 to 30 years. One of these technologies has been identified as the Super Cockpit. The impetus for developing an entirely new and revolutionary crew station concept is the burgeoning complexity of current and future systems and the lack of good interfaces that make optimum use of the spatial and psychomotor capabilities of the human. The Super Cockpit was envisioned as a generic crew station which would exploit the natural perceptual, cognitive and psychomotor capabilities of the operator. It is to be based upon several technologies which allow virtual visual, auditory, and tactile worlds to be created for the operator, along with an interactive control medium which uses eye, head and hand positions and speech as control inputs.

THE PROBLEM

Existing cockpits constrain the transfer of information from the machine to the human. Panel-mounted displays and limited field-of-view head-up displays act as two-dimensional "peep holes" into the three-dimensional world in which the pilot must operate. In order to understand his world, the pilot must search several displays in the cockpit, assimilate data from each highly coded presentation, then piece together these data to build a picture in his mind of the overall 3-D combat situation. This process taxes the visual and information processing abilities of the pilot because of the diversity, location and coding of the information. The problem is further exacerbated by "low bandwidth" controls (e.g., multifunction keyboard, joystick). These control interfaces are considered low bandwidth because they require lengthy training for their use.

The human has a remarkable capacity for handling spatial information as long as this information is portrayed in a way that takes advantage of the human's natural perceptual mechanisms. The super cockpit makes use of the 3-D information processing abilities of the operator by projecting information directly into the eyes using miniature components incorporated into the pilot's headgear. This approach allows a total spherical world to be generated, so that information is conveyed in spatially relevant positions. A 3-D auditory display also conveys to the ears sound that contains directional information. Other devices incorporated into the headgear measure the instantaneous position of the pilot's head and eyes and, together with voice commands, facilitate the aiming of weapons and the selection of switches projected in virtual space. Since both the audio and visual interfaces use information projected into virtual space, the need for a myriad of instruments and controls in the cockpit is eliminated. Intelligent machine aids infer the intent of the pilot and provide assistance in real time by screening and filtering information for the display and automatically configuring the virtual space of the super cockpit to enhance mission performance.

Figure 1. Block diagram of the Super Cockpit

SUPER COCKPIT MECHANIZATION

Figure 1 shows the functional components of a conceptual super cockpit which interface the pilot's head, eyes, hands, ears and voice to the aircraft. As in conventional cockpits, information giving the status, location, and orientation of the ownship aircraft and other aircraft is interfaced through a digital bus which couples the avionics, flight control, communications, weapon and crew subsystems. The virtual world generator decodes the information from the main digital bus and generates the instructions for the overall visual and auditory presentations to the pilot. Based upon the ownship vector state, position, sensor imagery and other information derived from the digital bus, the virtual world generator extracts selected data from a stored terrain data base, data link and information portrayal library to synthesize a virtual world which surrounds the pilot.
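
To make this data flow concrete, the following is a minimal, hypothetical sketch of the virtual world generator's per-frame work: decode the bus state, pull terrain and symbology, and compose a scene description. The class, field and function names (BusState, synthesize_world, etc.) are illustrative assumptions, not the actual system interfaces.

    from dataclasses import dataclass, field

    @dataclass
    class BusState:                       # decoded from the main digital bus
        position: tuple                   # ownship position (x, y, z)
        attitude: tuple                   # ownship orientation (az, el, roll)
        data_link_items: list = field(default_factory=list)

    def synthesize_world(state, terrain_db, portrayal_library):
        # Build one frame of the virtual world surrounding the pilot:
        # stored terrain around ownship plus symbols for data-link items.
        scene = [("terrain", patch) for patch in terrain_db.get(state.position)]
        for item in state.data_link_items:
            scene.append(portrayal_library[item["type"]](item))
        return scene

    # Toy usage: one threat from the data link rendered as a scene element.
    class FlatTerrain:
        def get(self, pos):
            return [pos]                  # a single terrain patch at ownship

    library = {"threat": lambda item: ("threat_symbol", item["bearing"])}
    state = BusState(position=(0.0, 0.0, 500.0), attitude=(0.0, 0.0, 0.0),
                     data_link_items=[{"type": "threat", "bearing": 45}])
    print(synthesize_world(state, FlatTerrain(), library))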

Virtual Display Presentations

Visual graphics and sensor imagery are combined for input to the binocular display electronics, which provide the final amplification and video drive signals for two high-resolution miniature image sources (e.g., cathode-ray tubes (CRTs)) which are integrated into the pilot's headgear. The image generated on each CRT is relayed to the eyes of the pilot via an infinity optical system within the visor of the helmet. The pilot then perceives a magnified, wide field-of-view representation of the original scene which appears in three dimensions and at optical infinity. It is envisioned that the information will be represented in a pictorial form which "mimes" the world (or is easy to understand), thereby conveying instantly to the pilot the status of his aircraft and of targets and threats in the outside world. With the exception of panel-mounted standby instruments, all of the cockpit instrument displays and switch panels are also represented in virtual space. During daylight conditions, the displayed scene appears superimposed on, and as a part of, the real world. During night and weather conditions (or if the pilot is encapsulated), the display appears as a "substitute" for the real world.
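
One concrete consequence of collimated binocular optics, raised again in the research questions below, is that apparent depth must be carried by retinal disparity. A back-of-envelope sketch, assuming a nominal 64 mm interpupillary distance, of the convergence angle needed to place a symbol at a given apparent distance:

    import math

    def convergence_angle_deg(distance_m, ipd_m=0.064):
        # Angle between the two lines of sight for a point at distance_m;
        # the two display channels would offset the symbol by this disparity.
        return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

    for d in (1, 10, 100):                # apparent distances in meters
        print(f"{d:>4} m -> {convergence_angle_deg(d):.3f} deg")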

The information which is to be conveyed by audio to the pilot is relayed to the speech synthesizer/3-D sound generator, which generates the proper voice and 3-D sound signals which are presented to the pilot's ears by high-fidelity binaural earphones. Many status annunciations will be made by a synthesized voice having a directional location which connotes to the pilot the nature of the input. For example, verbal instructions from an electronic co-pilot will appear to originate from behind the pilot's head. Other directional cues, indicating the directions of radiating threats, will appear to originate in the direction and at a range equivalent to that threat. These signals will be directionally accurate and stabilized in space regardless of the pilot's head position.
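
A minimal sketch of this space stabilization for an auditory cue, assuming a simplified azimuth-only geometry: the world-frame threat bearing is re-expressed in the pilot's head frame each time the tracker updates, so the cue's apparent source stays fixed in space as the head turns. A real system would drive binaural filtering with this angle; the function name and geometry are illustrative.

    def head_relative_azimuth(threat_az_deg, head_az_deg):
        # Re-express a world-frame bearing in the head frame,
        # wrapped to -180..180 deg (0 = straight ahead of the nose).
        return (threat_az_deg - head_az_deg + 180.0) % 360.0 - 180.0

    # Threat fixed at 90 deg (east); the pilot turns his head toward it.
    for head in (0, 45, 90):
        rel = head_relative_azimuth(90, head)
        print(f"head at {head:>2} deg -> cue rendered at {rel:>6.1f} deg")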

Image/Information Stabilization

The visual and aural images on the displays are stabilized in space by feeding back to the virtual world generator the instantaneous direction of the helmet and optical system. This is accomplished by using a magnetic helmet tracking system which measures the position and orientation of the helmet (and consequently the optics) within the cockpit with six degrees-of-freedom (i.e., azimuth, elevation, and roll in orientation, and x, y, and z in position). As the pilot moves his head, thereby continuously repositioning the optical projection in space, the virtual world generator compensates to portray the proper scene. Using this approach, the individual graphics or symbols presented on the display may be stabilized in one of four ways in virtual space (see the sketch following this list):

1) head stabilized: for head-aiming weapons or selecting functions from cockpit-stabilized switches;

2) cockpit stabilized: virtual renderings of cockpit instruments, switching panels, weapon stores, etc.;

3) earth stabilized: features which are superimposed and stabilized relative to earth coordinates, such as navigation waypoints, target locations, etc.;

4) space stabilized: other aircraft, in-flight missiles, or compass heading/attitude information.
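
The sketch below illustrates the four modes in one dimension (azimuth only). The variable names and the reduction to 1-D are assumptions for illustration; in particular, earth- and space-stabilized symbols coincide in this simplified geometry, though they differ in the full six-degree-of-freedom case.

    def display_azimuth(symbol_az, mode, helmet_az, aircraft_heading):
        # Convert a symbol's reference azimuth into display coordinates
        # given the helmet tracker reading and ownship heading (degrees).
        if mode == "head":                # e.g., the aiming reticle
            return symbol_az              # rides with the head as-is
        if mode == "cockpit":             # virtual instruments, switch panels
            return symbol_az - helmet_az
        if mode in ("earth", "space"):    # waypoints, targets, other aircraft
            return symbol_az - aircraft_heading - helmet_az
        raise ValueError(f"unknown mode: {mode}")

    # A cockpit-stabilized switch at 30 deg right drifts across the display
    # (opposite the head motion) so it appears fixed to the cockpit:
    for helmet in (0, 15, 30):
        print(helmet, display_azimuth(30, "cockpit", helmet, 0))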

Control Modalities

In order to allow the pilot to interact with the virtual world and to command aircraft functions, several interface modalities are built into the system. In addition to measuring head position and orientation, the tracking system also measures the instantaneous position and orientation of the pilot's hands within the cockpit, while another transducer tracks the instantaneous line-of-sight of the eye. A voice-actuated control system monitors the speech of the pilot and allows specific utterances (which are enabled by a separate switch) to activate specific functions or to enable specific head-, hand- or eye-directed commands within the virtual cockpit. Using this combination of components, the following control interfaces can be made relative to the virtual scenes presented to the eyes and ears (a sketch of the first mode follows this list):

1) head-aimed control: the pilot positions a head-stabilized reticle over a virtual switch box (which is cockpit-stabilized) and presses an enabling switch to activate the function;

2) voice-actuated control: the pilot speaks a specific control command into the microphone to activate a system function, or alternatively, places the head-stabilized reticle over a cockpit-stabilized virtual switch or an air/surface target and gives a specific selection from a menu of verbal enabling commands (e.g., select, mark, zoom, lock-on, etc.);

3) touch-sensitive panel: the pilot places a finger on a physical touch panel area (with superimposed cockpit-stabilized visual information from the infinity display) to call up other cockpit switching functions. Depending upon the selection, touch panel regions are redefined for additional panels which may be called up and windowed within the display;

4) virtual hand controller: in this mode the pilot moves his hand in three dimensions. The magnetic tracker senses hand position and orientation with six degrees-of-freedom. When the hand is put into a predetermined volume or region within the cockpit, a three-dimensional virtual control panel is windowed into the visual display. The pilot then activates functions or makes vernier adjustments by moving his hand or placing a finger over the virtual switch. Auditory, visual and/or tactile feedback is given to indicate a completed control action;

5) eye control: when eye position is summed with head position within the super cockpit electronics, the eye fixation angles relative to the virtual display and the world are known. With eye control it is now possible for the pilot to look at virtual or real switches to activate them, or to designate targets rapidly and aim weapons at large off-boresight angles in the outside world.
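
A minimal sketch of head-aimed control (mode 1), under the assumption that the head-stabilized reticle sits at the display center, so a cockpit-stabilized switch is selectable when the helmet line of sight falls within the switch's angular capture region. The names and the 2-degree capture radius are invented for illustration.

    def reticle_over_switch(helmet_az, helmet_el, switch, capture_deg=2.0):
        # True when the helmet line of sight (degrees) is within the
        # switch's angular capture region in both azimuth and elevation.
        return (abs(helmet_az - switch["az"]) < capture_deg and
                abs(helmet_el - switch["el"]) < capture_deg)

    switch = {"name": "radar_mode", "az": 30.0, "el": -10.0}
    helmet_az, helmet_el = 30.5, -9.2     # current tracker reading
    enable_pressed = True                 # the separate enabling switch
    if enable_pressed and reticle_over_switch(helmet_az, helmet_el, switch):
        print("activate:", switch["name"])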

Electronic Associate

In addition to interfacing with the system avionics, the virtual world generator also interfaces with a special-purpose processor termed a pilot intent inference engine. This is an expert system which serves as an electronic associate or "watchdog" for the pilot. The main purpose of this component is to screen, filter and control the flow of information to the pilot based upon its interpretation of the pilot's need for information during various mission phases. The associate will also aid the pilot in making decisions by presenting alternative approaches and the effects of these decisions. The associate will interface with the pilot through the virtual world generator so that visual and auditory information can be presented in spatially relevant directions.
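
As a toy illustration of the screening role (not the actual expert system), mission-phase weights might rank candidate display items so that only the most relevant pass through to the virtual world generator. The phases, item kinds, weights and display budget below are all invented:

    PHASE_WEIGHTS = {
        "ingress": {"threat": 3, "waypoint": 2, "fuel": 1},
        "attack":  {"target": 3, "threat": 2, "waypoint": 1},
    }

    def screen_items(items, phase, budget=2):
        # Rank candidate items by relevance to the mission phase and
        # pass only the top `budget` items on for display.
        weights = PHASE_WEIGHTS[phase]
        ranked = sorted(items, key=lambda i: weights.get(i["kind"], 0),
                        reverse=True)
        return ranked[:budget]

    items = [{"kind": "waypoint"}, {"kind": "fuel"}, {"kind": "threat"}]
    print(screen_items(items, "ingress"))   # threat and waypoint survive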

SUPER COCKPIT EXAMPLE

A pictorial representation of a possible "super cockpit" scene is shown in Figure 2. Depicted here is the visual scene which a pilot might view when flying at low altitude and at night. The scene is projected to the pilot in three dimensions and overlays the real world with one-to-one spatial registration. (Under daylight conditions the scene would be transparent and only a subset of the symbols would be superimposed over the outside world.) The instantaneous size and location of this "virtual window" into a sensor- and computer-generated world is equivalent to the field-of-view of the binocular optics described above and may be as great as 140 degrees horizontally by 60 degrees vertically. Since the instantaneous display scene is continuously updated by head/helmet movement, the entire 4π-steradian field-of-regard is available to the operator.
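
As a rough check on these numbers, if the instantaneous window is modeled as a 140-degree span of azimuth over elevations of plus or minus 30 degrees about the line of sight (an assumption; the actual window geometry may differ), its solid angle is

    \Omega = \Delta\lambda \,(\sin\varphi_2 - \sin\varphi_1)
           = \frac{140\pi}{180}\,\bigl(\sin 30^\circ - \sin(-30^\circ)\bigr)
           \approx 2.44\ \mathrm{sr},

or roughly 19 percent of the full 4π ≈ 12.57 sr field-of-regard, which is why continuous head-slewed update of the window is needed to cover the whole sphere.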

In this virtual cockpit scene, data from the digital bus (i.e., sensors, threat warning system, terrain map, weapon delivery, etc.) are fused, organized and presented so that the spatial meaning of these data is conveyed instantly. Critical information is no longer compressed into small two-dimensional representations, but now surrounds the pilot and is seen in three dimensions, relevant to the location of the information source. For example, the artificial horizon normally presented in the attitude direction indicator (or in a small field-of-view HUD) now appears as a panoramic horizon surrounding the pilot and overlaid on the real world. The heading or compass rose information now appears impressed over the horizon instead of in a horizontal situation indicator in the cockpit. Targets, navigation waypoints, threats, etc., now appear where they really are in space, rather than in a small two-dimensional presentation in the cockpit. Essential aircraft state information is windowed into the virtual world when needed. An electronically inset "rear view mirror" also conveys to the pilot visually what may be behind him.

The visual display is augmented with both auditory and tactile displays. The auditory display gives the pilot three-dimensional sound which provides localization cues to the directions of different targets, aircraft, threats, etc. Warning signals are perceived as coming from a particular point in the cockpit. A synthesized voice can even whisper in the pilot's ear information which cannot be ignored.

The pilot interacts with the display spatially by pointing his head/helmet, eyes or hand/finger-mounted sensors at objects in the display and giving verbal commands. Functions can also be activated by merely looking at a displayed switch and saying "select," "on," "off," "go there," or "stop here," etc. Hand orientation, once the hand is placed in predetermined locations within the cockpit, is sensed and used to command system functions. Tactile displays give the pilot "touch" feedback that a virtual switch has indeed been pushed.

The pilot can also interact "visually" with crewmembers in the same or other aircraft by designating air and ground targets with eye position and having those targets appear in the same 3-D spatial location for a pilot in another aircraft. The pilot's perspective can be changed by calling up a "god's eye" view of the tactical situation. In this case, the pilot's ownship location and status relative to other ground and airborne targets are presented in a miniature three-dimensional world in the cockpit. From this view the pilot can rapidly assess his mission options.

Figure 2. Representative Super Cockpit display scene

ADVANTAGES OF THE SUPER COCKPIT

There are many advantages of the super cockpit as contrasted with conventional crew stations. Indeed, the display medium truly offers a port for the avionics to communicate spatial awareness to the pilot in all directions and in three dimensions. The pilot is able to interact using natural psychomotor skills, easily providing directional commands to aircraft subsystems. The super cockpit is therefore easier to operate, since the display/control interfaces are now more intuitive. Since the configuration of the cockpit is governed mainly by software rather than hardware restrictions, the cockpit can be dynamically reconfigured for a different mission, to suit pilot preference, or as upgrades to the avionics are installed. Another advantage of the virtual cockpit is that the pilot can be fully encapsulated for protection in a chemical/biological/radiological (CBR) environment, or reclined to reduce the canopy profile, decrease radar cross-section and improve aerodynamics and maneuvering limits. Likewise, the pilot need not be present in the actual vehicle which he is piloting, since with the appropriate data links a "remote" super cockpit would provide the visual and aural "telepresence" cues as if he were located in the vehicle. Finally, the super cockpit is practically weightless, occupies little cockpit space, and is low in cost compared with the multiple displays of conventional cockpits.

CHALLENGES TO THE HUMAN FACTORS COMMUNITY

As described above, the super cockpit provides entirely new mechanisms for visual, aural and tactile interfaces to the pilot within the military environment. For these new interfaces to be effective, a systematic development of the hardware and software must be based upon sound human factors design principles. Some design criteria and directions can be derived from interpretation or extrapolation of previous research (e.g., wide field-of-view training simulator development); however, most will require a new generation of human factors research to acquire the empirical data upon which new virtual interface models and metrics can be built. As stimulation for these future research efforts, the following questions are presented.

Virtual World Context-Oriented Questions

What should be the instantaneous size of the window into the world provided by the super cockpit display?

How should information be rendered in the "focal" versus "ambient" visual areas of the display; e.g. how should a flow field be created which conveys attitude, altitude and velocity without direct visual fixation?

What degree of spatial stabilization/update rate is needed for the visual presentation?

What registration accuracy is needed for virtual image projections over the outside world? What problems will occur when there is conflict between vestibular cues, movements of display visual presentations, localized audio presentations, and the outside world?

Since information can be presented to each eye separately, what problems will occur when there is a difference between vergence and accommodative cues (i.e., the display is collimated to optical infinity while retinal disparity is used to present information at different apparent distances)?

What procedures should be used to align the display world to the outside world and to adjust the binocular alignment of the two display channels?

What procedures should be used to align the audio display to the visual display?

Virtual World Content-Oriented Questions

How should portrayal modalities be used together (i.e. visual, auditory and tactile)? How should information be portrayed to represent a "gestalt" of the battle situation which can be instantly recognized and reacted to?

Which control modalities should be used with which display modalities?

When should information be head-stabilized versus cockpit-stabilized?

How should the virtual portrayal medium be used to direct the pilot's attention to special items of information?

How should virtual communication take place between operators within the same aircraft or between different aircraft?

How should the dialogue between the electronic associate and the pilot be structured within the virtual portrayal medium?

Virtual World Quality Assessment

How should the "goodness" of the virtual display designs be measured, including the accompanying perceptual, cognitive and motor control resources which are expended to accomplish specific tasks?

How can the level of situation awareness conveyed to the pilot be assessed?

Is there an index of cognitive complexity that can be used as a design parameter within the virtual worlds?

How can the likelihood of a correct decision being made by the pilot be predicted or assessed given different virtual portrayal and control interfaces?

CONCLUSIONS

The super cockpit represents a radical departure from conventional crew station designs but one which may be necessary in order to exploit the spatial capabilities of the human in the weapon system. The intent is to provide interfaces which are intuitive and easy to use and that take advantage of our natural perceptual mechanisms in the design of a virtual 3-D world of sights and sounds.

The questions raised above are not exhaustive, but they are representative of the issues which will confront the developers of the super cockpit and its related virtual technologies. The challenge to the human factors community is to draw upon previous experience and build a new science which will provide the theoretical and empirical framework upon which these new technologies can be developed and exploited.

