Search results for keyword: multimodal interface (2 results found)
EN
Convenient human-computer interaction is essential for many exhausting and concentration-demanding activities, among them maintaining cyber-situational awareness and performing dynamic and static risk analysis. A specific design method for a multimodal human-computer interface (HCI) for controlling the visualisation of cyber-security events is presented. The main role of the interface is to support security analysts and network operators in their monitoring activities. The proposed method of designing HCIs is adapted from the methodology of robot control system design. Both kinds of systems act by acquiring information from the environment and using it to drive devices that influence the environment. In the case of robots the environment is purely physical, while in the case of HCIs it encompasses both the physical ambience and part of cyberspace. The goal of the designed system is to efficiently support a human operator in the presentation of cyberspace events such as incidents or cyber-attacks; in particular, manipulation of graphical information is necessary. As monitoring is a continuous and tiring activity, control over how the data is presented should be exerted in as natural and convenient a way as possible. Hence, two main visualisation control modalities have been selected for testing: static and dynamic gesture commands and voice commands, treated as supplementary to the standard interaction. The presented multimodal interface is a component of the Operational Centre, which is part of the National Cybersecurity Platform. Building the interface out of embodied agents proved very useful in the specification phase and facilitated the interface implementation.
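The abstract describes two supplementary control modalities (gesture and voice) feeding the same visualisation-control layer. A minimal sketch of how such modality inputs might be mapped onto a common command set is shown below; all gesture tokens, voice phrases, and command names are illustrative assumptions, not the actual command set of the Operational Centre interface.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    source: str  # which modality produced the command: "gesture" or "voice"
    name: str    # visualisation-control action, e.g. "zoom_in"

# Hypothetical per-modality vocabularies; the real interface's commands
# are not given in the abstract.
GESTURE_COMMANDS = {"pinch_out": "zoom_in", "pinch_in": "zoom_out"}
VOICE_COMMANDS = {"zoom in": "zoom_in", "show incidents": "filter_incidents"}

def recognise(modality: str, token: str) -> Optional[Command]:
    """Map a recognised gesture or voice token to a shared command, or None."""
    table = GESTURE_COMMANDS if modality == "gesture" else VOICE_COMMANDS
    name = table.get(token)
    return Command(modality, name) if name else None
```

Routing both modalities through one command type keeps them genuinely supplementary: the visualisation layer reacts to `Command` objects without caring which channel produced them.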
EN
A new computer interface named Virtual-Touchpad (VTP) is presented. The Virtual-Touchpad provides a multimodal interface that enables controlling computer applications by hand gestures captured with a typical webcam. The video stream is processed in the software layer of the interface. Existing video-based interfaces that analyse frames of hand gestures are reviewed first; then the hardware configuration and software features of the Virtual-Touchpad are described.
PL
The paper presents a multimodal interface called the Virtual Touchpad. It enables controlling computer applications with hand gestures extracted from images captured in real time by a video camera. The hardware configuration and the software layer of the interface are described. The software layer processes the video stream, detects and classifies specific gestures, and interprets them in order to perform the corresponding actions.
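The abstracts above describe a three-stage software layer: process the video stream, detect and classify gestures, then interpret them as application actions. A minimal sketch of the final interpretation stage follows; the gesture labels, action names, and the `classify_gesture` placeholder are assumptions for illustration, not the actual VTP implementation.

```python
def classify_gesture(frame):
    """Placeholder for the detection/classification stage: in the real
    interface this would analyse a webcam frame and return a gesture
    label such as "swipe_left", or None if no gesture is found."""
    return None

# Hypothetical gesture-to-action table for the interpretation stage.
ACTIONS = {
    "swipe_left": "previous_page",
    "swipe_right": "next_page",
    "open_palm": "pause",
}

def interpret(gesture_label):
    """Map a classified gesture label to an application command;
    unrecognised labels become a no-op."""
    return ACTIONS.get(gesture_label, "noop")
```

In a full implementation the main loop would repeatedly grab a frame (e.g. from a webcam), call the classifier, and dispatch `interpret`'s result to the controlled application.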