Self adaptive interfaces and multi-modal human interaction for combat systems

By Pascale SOULARD
pascale.soulard@artis-facta.com

ARTIS FACTA - Human Factors Engineering
51, rue de l'Amiral Mouchez - 75013 PARIS
Tel: +33 1 43 13 32 33 - Fax: +33 1 43 13 32 39

Published in Undersea Defence Technology, London, 1992.



Abstract

As submarine warfare systems become more powerful and complex, their global efficiency relies more and more upon the quality of the man-system interaction schemes they use. To improve this quality, it is necessary not only to introduce new interaction media, but also to take human-factors constraints into account from the earliest phases of system design. Such constraints are related to the nature of the media themselves, to the operational context and to the operators' individual characteristics.

These principles are embedded within a multi-modal communication system designed to enable the interaction system to choose, dynamically and in real time, the best-suited interaction modes. Keeping track of interaction events, the system performs a continuous analysis of human activity in order to build and dynamically update a model of the operator, accounting for the user's preferences, habits and reasoning modes. This analysis is conducted on the basis of an exhaustive a priori model of the operator's tasks.

This approach enabled us to validate the concept of self-adaptive interfaces, which allow a dynamic adaptation of the interaction logic, and of the nature and form of the dialogue media, to environmental variations and individual preferences, in order to optimize the global functioning of the system.




Introduction: the importance of man-machine communication

Paradoxically, the growing sophistication of the tools available to interface designers (User Interface Management Systems, new interaction media such as voice input or the dataglove) does not necessarily increase the quality of human-computer interaction.

On the contrary, new and multiple interaction media may confront the operator with a problem of choice: the best-suited interaction medium depends on criteria the operator may not have the ability or the time to evaluate, given his own faculties and the operational context. Consequently, it is important to ensure that the operator does not become a restrictive element of the system.

To avoid these problems, man and machine have to be considered as a whole from the earliest phases of system design, through the development of advanced multimodal and adaptive Man-Machine Interfaces.

This paper presents the concept of advanced interfaces (multimodal and self-adaptive) and how we validated it for a submarine detection system.



The concept of advanced interfaces

Multi-modal user interfaces

Communication (within a multimedia system) is multi-modal if multiple modalities are available to the user and an input (or output) expression can be built up from several modalities. A modality may be the mouse, the keyboard, the screen, the tracking ball, or a more sophisticated modality such as speech recognition or synthesis, gesture recognition, and so on.

Thanks to this diversification, communication gains in adaptability (different possible modalities for the same command: voice or menus) and the application gains in efficiency (no eye movement away from the screen when voice input is used in an overload situation).
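
As an illustration, here is a minimal sketch of how one command expression can be built up from several modalities; the event structure and the names are hypothetical, not taken from the paper.

    # A minimal sketch, assuming hypothetical event and command types:
    # the same command can be produced either purely by menu selection
    # or by a voice verb combined with a mouse designation.
    from dataclasses import dataclass

    @dataclass
    class Event:
        modality: str   # "mouse", "voice", "keyboard", ...
        content: str    # payload of the event

    def build_command(events):
        """Fuse events from several modalities into one command expression."""
        verbs = [e.content for e in events if e.modality == "voice"]
        picks = [e.content for e in events if e.modality == "mouse"]
        if verbs and picks:          # spoken verb + click on a track
            return f"{verbs[0]} {picks[0]}"
        if picks:                    # pure menu interaction
            return picks[0]
        return None

    # Both interaction styles yield the same command:
    print(build_command([Event("voice", "designate"), Event("mouse", "track-12")]))
    print(build_command([Event("mouse", "designate track-12")]))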

Self-adaptive interfaces

Taking both physiological and cognitive human factors into account enables the system to propose dynamically a set of pertinent data according to the operational context and to the operator's mental state. The goal is to facilitate and optimize his task, especially in critical situations.

It is important to point out here the difference between self-adaptive interfaces and adaptable interfaces.

Adaptable interfaces are defined during the design of the interface, taking into consideration only predefined levels of competence.

On the contrary, a system with self-adaptive interfaces adapts at run time the nature and kind of communication devices, and the logic of the interactions, to the characteristics of the task and to the physiological and cognitive state of the operator.

An important point to insist on is that, in our ASW application, operators may have varying levels of skill and preferred strategies when performing tasks with varying levels of demand. Indeed, when operating a multi-sensor detection or decision-making system, the task performed varies according to the mission. For instance, during the transit phase the demand level of the task is relatively low and the operator is in an under-vigilance state. On the contrary, during an attack or tracking phase stress is very high and the operator has to face a heavier workload. These remarks led us to adopt self-adaptive interfaces for such a system.

Moreover, the operator's characteristics depend more on individual behaviour (habits, preferences) than on demand levels. All the operators are experts in their domain, but they use neither the same reasoning nor the same tools to perform a task.
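
The distinction can be made concrete with a small sketch; the phase names, settings and preference handling below are assumptions chosen for illustration, not the actual design. Where an adaptable interface would freeze these choices at design time, a self-adaptive one re-evaluates them at run time from the mission phase and the individual operator.

    # A minimal sketch of run-time adaptation, assuming a hypothetical
    # operator record. During transit (low demand) the interface fights
    # under-vigilance; during attack or tracking (high demand) it sheds load.
    def adapt_interface(phase, operator):
        """Return interaction settings chosen at run time, not design time."""
        if phase == "transit":               # low demand: keep the operator engaged
            return {"alerts": "salient", "detail": "full",
                    "input": operator.get("preferred_input", "menus")}
        if phase in ("tracking", "attack"):  # high demand: reduce workload
            return {"alerts": "filtered", "detail": "essential",
                    "input": "voice"}        # eyes stay on the tactical display
        return {"alerts": "normal", "detail": "full", "input": "menus"}

    # Individual habits weigh as much as the demand level:
    print(adapt_interface("transit", {"preferred_input": "voice"}))
    print(adapt_interface("attack",  {"preferred_input": "menus"}))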



SAITeR: an application of advanced interfaces for combat systems

SAITeR (Séquencement d'Activités Intelligent en Temps Réel, i.e. intelligent real-time process scheduling) is an application for submarines designed and developed in the Advanced Research Department of TS.ASM Arcueil. It automatically performs a complete scheduling of TMA (Target Motion Analysis) tasks, each task running a specific data-processing algorithm whose triggering depends on the evolution of the operational and technical context. The objective is to provide the position and kinematics of detected vessels as accurately as possible. To achieve this objective, SAITeR chooses the best-matching algorithms to trigger at the best-matching time, with relevant information from the best-matching vessels (analysing the quality and the quantity of the available information).
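
This selection logic can be pictured as follows; the sketch is hypothetical (the algorithm names and triggering conditions are invented for illustration), but it shows the kind of context-dependent filtering involved.

    # A hypothetical sketch of "best-matching" selection: each TMA algorithm
    # declares the conditions under which it is worth triggering, and the
    # scheduler rates candidates against the current tactical context.
    TMA_ALGORITHMS = [
        # (name, minimum number of bearings, needs an ownship manoeuvre)
        ("short-baseline", 5,  False),
        ("long-baseline",  20, True),
    ]

    def schedule(context):
        """Pick the algorithms worth triggering in the current context."""
        selected = []
        for name, min_bearings, needs_manoeuvre in TMA_ALGORITHMS:
            if context["bearings"] < min_bearings:
                continue              # not enough information yet
            if needs_manoeuvre and not context["ownship_manoeuvred"]:
                continue              # observability condition not met
            selected.append(name)
        return selected

    print(schedule({"bearings": 8,  "ownship_manoeuvred": False}))  # ['short-baseline']
    print(schedule({"bearings": 25, "ownship_manoeuvred": True}))   # both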

A complex system

SAITeR consists of two parts:

  • An automatic part (A), which triggers algorithms depending on the operational context (ownship manoeuvres, detected vessel manoeuvres), the source of detection (mono- or multi-sensor detection, new contact or loss of detection) and the results of the previous algorithms.

  • A manual part (M), which enables the operator to trigger particular algorithms interactively on a small number of vessels in case of poor results from the automatic part (A) (either from the operator's point of view or due to divergent algorithms).

Several elements, such as the screen overload of the (A) part, the relevance of the information displayed on vessels (the order of the information, its accuracy) and the succession of tasks necessary to run an algorithm in the (M) part (selection of a section of track, taking past results into account), led us to evaluate the contribution that man-system communication must make to optimize the global functioning of the man-machine system.
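
The cooperation between the two parts can be sketched as follows; the event names and the precedence rule are assumptions, kept deliberately simple.

    # A hypothetical sketch of the (A)/(M) split: context events drive the
    # automatic part, while a manual request pre-empts it for the vessels
    # it designates.
    def automatic_part(event, vessels):
        """(A): trigger algorithms on all vessels when the context changes."""
        if event in ("ownship_manoeuvre", "new_contact", "loss_of_detection"):
            return [(v, "auto-TMA") for v in vessels]
        return []

    def manual_part(request):
        """(M): the operator forces a particular algorithm on a few vessels."""
        return [(v, request["algorithm"]) for v in request["vessels"]]

    auto = automatic_part("new_contact", ["track-3", "track-7"])
    manual = manual_part({"algorithm": "long-baseline", "vessels": ["track-7"]})
    overridden = {v for v, _ in manual}
    # Manual triggering takes precedence for the vessels it names:
    plan = manual + [(v, a) for v, a in auto if v not in overridden]
    print(plan)   # [('track-7', 'long-baseline'), ('track-3', 'auto-TMA')]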

SAITeR and self-adaptive interfaces

As noted above, a dynamic adaptation of the interaction logic and of the quality of the displayed information has to be allowed.

Some simple ideas illustrate the concept of SAITeR's self-adaptive interfaces (a sketch of the third one follows the list):

  • when the operator looks for the same information on different vessels (such as the date of the last TMA result) and this request requires many interactions (opening a window related to a particular vessel, scrolling in the window to the right information, and so on), the system anticipates the operator's request and provides the information automatically (or in a simpler way).

  • an analysis of the (A) part screen load (number of vessels and duration of their presence) can justify a reduction of the information displayed under certain conditions: displaying only the most threatening vessels, or only the vessels being processed by the (M) part, for instance.

  • the system can notice certain operator habits during the performance of particular tasks, such as a systematic call to the results of the last TMA before processing by the (M) part. These individual characteristics can then be stored by the system in an operator model, yielding a simplification of the task (the display of the result becomes automatic). Moreover, the system reinitializes and updates the operator model by performing a continuous analysis of human activity.
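
The third idea amounts to a simple habit-detection loop over the interaction history; in the sketch below the threshold and the model structure are assumptions, not the actual SAITeR design.

    # A hypothetical sketch of habit detection: the interaction history is
    # analysed continuously, and a request repeated often enough before a
    # given task is promoted to an automatic display in the operator model.
    from collections import Counter

    HABIT_THRESHOLD = 3   # assumed number of repetitions defining a habit

    class OperatorModel:
        def __init__(self):
            self.history = Counter()   # (task, preceding request) -> count
            self.automatic = set()     # requests now performed automatically

        def observe(self, task, request):
            """Continuous analysis of the operator's activity."""
            self.history[(task, request)] += 1
            if self.history[(task, request)] >= HABIT_THRESHOLD:
                self.automatic.add((task, request))

    model = OperatorModel()
    for _ in range(3):   # the operator always checks the last TMA result first
        model.observe("manual_TMA", "show_last_TMA_result")
    # The display now happens without any interaction:
    print(("manual_TMA", "show_last_TMA_result") in model.automatic)   # True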

SAITeR and multimodal communication

The development of SAITeR naturally led us to analyse the operator workstation globally. Even if not strictly necessary, the diversification of interaction media can improve working conditions and operator performance. Indeed, a study carried out by an ergonomist revealed constraints on the existing workstation:

  • the screen and button positions can disturb the operator, who must raise his arms to press certain buttons (located on the sides of the workstation),

  • if the operator wishes to keep his right hand on the tracking ball while pressing certain buttons, he has to mask part of the screen temporarily, a motion that also involves a torsion of the back.

These disadvantages led us to imagine a multimodal workstation composed of a touch-entry screen (in place of some buttons), voice input to keep the eyes on the screen during some commands, and speech synthesis under certain conditions, such as the use of headphones to reduce the ambient noise or the use of short messages.
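
The conditions stated for speech synthesis translate into a simple run-time choice of output modality; the rule and the word limit below are assumptions derived from those conditions.

    # A hypothetical sketch of output-modality choice on the multimodal
    # workstation: speech synthesis is reserved for short messages and for
    # situations where headphones mask the ambient noise.
    MAX_SPOKEN_WORDS = 8   # assumed limit for a "short" message

    def choose_output(message, headphone_worn):
        """Return "speech" only when both synthesis conditions hold."""
        short = len(message.split()) <= MAX_SPOKEN_WORDS
        return "speech" if (short and headphone_worn) else "screen"

    print(choose_output("New contact bearing 045", headphone_worn=True))    # speech
    print(choose_output("New contact bearing 045", headphone_worn=False))   # screen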



The generic architecture

We have been working on advanced human interfaces and on cognitive science for two years within a THOMSON-CSF strategic project called "Poste de Travail Intelligent" (Intelligent Workstation). In this project we have defined a generic architecture that makes it possible to build multimodal and self-adaptive interfaces for operational applications.

As we have seen above, providing such capabilities requires a design of the Man-Machine Interface based on a model of the operational task, on a dynamic model of the operator that takes the operational context into account, and on a prior analysis of the activity from an ergonomic point of view. We have therefore embedded this approach within a multimodal communication system based on a distributed knowledge-based design, and we have validated it with SAITeR, integrating a speech recognition/synthesis system in addition to the mouse and keyboard.

This generic architecture is composed of three main modules (a sketch of their chaining follows the list):

  • A Media Management Module, in charge of formatting the events arriving from the different media or devices.

  • A Multimodal Request Understanding Module, in charge of understanding the operator's multimodal requests. Based on a linguistic and semantic analysis of the formatted events coming from the media manager, this module provides syntactically and semantically correct requests to the upper module.

  • A Dialog Understanding Module, which controls the dialog consistency, i.e. when the operator makes a multimodal request it aims:

    • at identifying the task the operator is currently performing,

    • at proposing an adapted answer or feedback to the request,

    • at dynamically updating the task model, the operator model and the interaction history by analysing the interactions,

    • at managing the strategy of the system and at anticipating the next task.
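
The three modules chain naturally into a pipeline. The sketch below is hypothetical (names and data structures are invented, and the linguistic analysis is reduced to simple string handling); only the module boundaries come from the architecture described above.

    # A hypothetical sketch of the generic architecture: raw device events
    # are formatted, fused into a well-formed request, then interpreted
    # against the task model and the interaction history.
    def media_manager(raw_event):
        """Media Management Module: format events coming from the devices."""
        device, payload = raw_event
        return {"modality": device, "content": payload}

    def request_understanding(events):
        """Multimodal Request Understanding: fuse events into one request."""
        # Joining contents stands in for the linguistic/semantic analysis.
        return " ".join(e["content"] for e in events)

    def dialog_understanding(request, task_model, history):
        """Dialog Understanding: keep the dialog consistent, anticipate."""
        history.append(request)                         # update the history
        task = task_model.get(request.split()[0], "unknown task")
        return f"feedback for '{request}' (current task: {task})"

    history = []
    task_model = {"designate": "target designation"}
    events = [media_manager(("voice", "designate")),
              media_manager(("mouse", "track-12"))]
    print(dialog_understanding(request_understanding(events), task_model, history))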



Conclusion

These concepts have been validated on a specific application. Yet many applications can be found in naval warfare in which the concept of advanced interfaces improves the performance of the system.

To conclude, we wish to emphasize the necessary cooperation between experts from different disciplines (advanced information processing, ergonomics, cognitive psychology and operational personnel) in order to design interactive systems in which the solving of each specific problem is delegated to the expert of the relevant domain.



