Found 27 results

Search results
Searched in keywords: human machine interface

1
EN
Controlling a remotely operated underwater vehicle (ROV) is an extremely challenging task that requires precise maneuvering and navigation in complex and often unpredictable environments. The operator faces numerous difficulties, including limited visibility, communication constraints, and the need to interpret data from various sensors. This paper describes a method for calibrating a wearable system equipped with inertial measurement unit (IMU) sensors that controls the underwater manipulators. To implement a solution that allows the robot to be controlled by the operator's hand movements, the movement of the arm must be measured. This task is carried out by IMU sensors mounted at appropriate places on the ROV operator's suit so that the movement of his/her upper limbs can be mapped. These movements are transferred to the manipulator arms on the ROV, making it possible to interact with the environment by manipulating objects underwater.
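The calibrate-then-map idea in this abstract can be shown as a minimal sketch: record the IMU reading in a known reference pose, then subtract that offset from later readings to get manipulator joint commands. The reference pose, axis names and joint limits below are hypothetical illustrations, not the paper's actual procedure.

```python
# Minimal sketch of IMU-to-manipulator calibration and mapping.
# All names, axes and limits are hypothetical, not from the paper.

def calibrate(reference_reading):
    """Store the IMU angles (deg) measured while the arm is held in the
    agreed reference pose; they become the zero offset."""
    return dict(reference_reading)

def imu_to_joint_angles(reading, offset, limits=(-90.0, 90.0)):
    """Convert a raw IMU reading to manipulator joint angles by removing
    the calibration offset and clamping to the joint's mechanical range."""
    lo, hi = limits
    return {axis: max(lo, min(hi, reading[axis] - offset[axis]))
            for axis in offset}

offset = calibrate({"roll": 2.5, "pitch": -1.0, "yaw": 0.5})
cmd = imu_to_joint_angles({"roll": 32.5, "pitch": 44.0, "yaw": 120.5}, offset)
print(cmd)  # roll 30.0, pitch 45.0, yaw clamped to 90.0
```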
EN
Automation transparency is a means to provide understandability and predictability of autonomous systems by disclosing what the system is currently doing, why it is doing it, and what it will do next. To support human supervision of autonomous collision avoidance systems, insight into the system's internal reasoning is an important prerequisite. However, there is limited knowledge regarding transparency in this domain and its relationship to human supervisory performance. Therefore, this paper investigates how an information processing model and a cognitive task analysis can be used to drive the development of transparency concepts. In addition, realistic traffic situations, reflecting the variation in collision type and context that can occur in real life, were developed to empirically evaluate these concepts. Together, these activities provide the groundwork for exploring the relation between transparency and human performance variables in the autonomous maritime context.
EN
The paper's primary goal is to provide a simulation of an intelligent irrigation system that helps save money and water. The system is controlled by two Programmable Logic Controllers (PLCs) and a Human Machine Interface (HMI) connected via Ethernet. They are programmed to run the whole operation with the help of sensors and actuators without any human interference; the operator's role is only to start or stop the system by switching it ON or OFF.
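The sensor-driven ON/OFF logic the abstract outlines can be sketched as a small decision function: the pump runs only while the master switch is ON and soil moisture is low, with a hysteresis band so the pump does not chatter. The thresholds and names are illustrative assumptions, not taken from the PLC program.

```python
# Hedged sketch of hysteresis-based irrigation control; thresholds are invented.

def pump_command(system_on, moisture, pump_running, low=30.0, high=45.0):
    """Return True if the pump should run.  Starts below `low` % moisture,
    stops above `high` %, and is forced off when the system is switched OFF."""
    if not system_on:
        return False
    if moisture < low:
        return True
    if moisture > high:
        return False
    return pump_running  # inside the hysteresis band: keep current state

print(pump_command(True, 25.0, False))  # True  - dry soil, start pump
print(pump_command(True, 40.0, True))   # True  - still in band, keep running
print(pump_command(False, 10.0, True))  # False - operator switched OFF
```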
EN
The German-funded FernSAMS project aimed at the development of unmanned, remote-controlled tug operations using AR/VR technology. After extensive simulation tests with ship-handling simulators, the developed FernSAMS AR/VR system has now been tested in situ with a scale model of the tug. The model test results showed very robust stability in remote operations, with improved situational awareness provided by the AR/VR system and sensors. After a short introduction to the FernSAMS concept and some first insights into the FernSAMS Human-Machine-Interface tests within the simulator, this paper introduces the technical setup of the scale-model tests conducted to test the operational and technical feasibility of AR/VR-based remote control. This includes an overview of the system and sensor integration and an analysis of the effectiveness of the AR/VR system combined with 360-degree video streaming.
5
EN
Modern device development uses the concept of a digital twin. A digital twin is an accurate digital copy of something that exists, or is planned to be realized, in the physical world. The digital twin is not only a virtual model of the physical system but also a dynamic carrier of data and status information, obtained through a series of IoT-connected sensors that collect data from the physical world and send it to machines. The digital twin provides an overview of what is happening to the device in real time. This is very important in industry, as this information helps reduce maintenance issues and ensure production performance. This work focuses on the design and creation of a cyber-physical system and its digital twin, based on CAD modeling in conjunction with simulation and programming tools connected to real and simulated control systems. This process accelerates application development, with the possibility of creating a PLC control program and tuning the system already in the design phase. Thus, the physical realization can proceed in parallel with the programming and creation of the HMI interface. Modular programming further accelerates software development [1]. The created system and its digital twin serve as a unified teaching tool that many students and users can work with without needing real devices. This approach allows testing of program algorithms without the risk of damaging physical devices and is also suitable for distance learning.
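The mirroring idea behind a digital twin can be reduced to a toy sketch: a virtual object receives the same state updates a physical device would report, so monitoring logic (here, a soft over-limit alarm) can run against either the real device or its twin. The class, sensor names and limit are invented for illustration.

```python
# Toy digital-twin state mirror; names and limits are hypothetical.

class DigitalTwin:
    def __init__(self):
        self.state = {}    # last known value per sensor
        self.alarms = []   # soft alarms raised from mirrored data

    def update(self, sensor, value, limit=None):
        """Mirror one sensor reading; record an alarm if it exceeds `limit`."""
        self.state[sensor] = value
        if limit is not None and value > limit:
            self.alarms.append(f"{sensor} above {limit}: {value}")

twin = DigitalTwin()
twin.update("motor_temp_C", 61.0, limit=80.0)
twin.update("motor_temp_C", 85.5, limit=80.0)
print(twin.state["motor_temp_C"], len(twin.alarms))  # 85.5 1
```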
EN
The maritime industry is striving towards increasing levels of autonomy in navigation. However, fully autonomous vessel navigation requires an extraordinarily complex system. As a step towards full autonomy, and to reduce system complexity, nautical officers should still be available on board to take over the watch from the autonomous system in situations in which human intervention is required. A highly advanced human-machine interface (HMI) is therefore essential to support nautical officers in retrieving all necessary information to manage the takeover. The implementation of the autonomous system and the introduction of an HMI create new processes, which need to be defined. In this paper, we portray our approach to defining the processes for watch handovers from the autonomous system to nautical officers by investigating current watch handover processes. Subsequently, the resulting process models are described and discussed.
EN
A robust myoelectric control system (MCS) is essential for the design of electromyography (EMG) based human-machine interfaces (HMI) such as prosthetics, exoskeletons, wheelchairs and humanoid robots. The functionality of the current pattern recognition (PR) technique in MCS is limited by factors such as variation in the user's limb position. To overcome the effect of this dynamic variation, an invariant higher-order statistics frequency-domain feature set (HOS-FD) is proposed in this paper. EMG data from eight hand movements in five limb positions are considered. When trained with data from three limb positions, the HOS-FD with a three-hidden-layer deep neural network (DNN) achieved a significantly high classification accuracy of 97.84%±0.22, compared to other classifiers, viz. a single-layer artificial neural network (ANN), linear discriminant analysis (LDA), support vector machine (SVM), k-nearest neighbor (kNN), decision tree (DT) and Naive Bayes (NB), with accuracies of 94.11%±1.63, 95.02%±1.89, 94.63%±2.33, 90.05%±4.11, 86.66%±4.72 and 78.78%±5.02 respectively. Further, when trained with data from all five limb positions, the proposed feature set with the DNN had an accuracy of 99.16%±0.14. The statistical significance of the high classification accuracy obtained using the proposed feature set is also proven using multiple analysis of variance tests (p < 0.001). These results indicate that the proposed method is a promising technique for HMI.
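EMG classifiers like the ones compared above are trained on features extracted from short signal windows. As a small, generic illustration of a frequency-domain feature (this is the classical median frequency, one common baseline, not the paper's HOS-FD set), a naive-DFT version can be sketched as:

```python
import math

def power_spectrum(signal, fs):
    """Naive DFT power spectrum (O(n^2), fine for short EMG windows)."""
    n = len(signal)
    freqs, power = [], []
    for k in range(1, n // 2):  # skip DC, keep positive frequencies
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        freqs.append(k * fs / n)
        power.append(re * re + im * im)
    return freqs, power

def median_frequency(signal, fs):
    """Frequency below which half of the spectral power lies."""
    freqs, power = power_spectrum(signal, fs)
    total = sum(power)
    acc = 0.0
    for f, p in zip(freqs, power):
        acc += p
        if acc >= total / 2:
            return f
    return freqs[-1]

# A pure 50 Hz tone sampled at 1 kHz has a median frequency of 50 Hz.
sig = [math.sin(2 * math.pi * 50 * t / 1000) for t in range(200)]
print(median_frequency(sig, fs=1000))  # 50.0
```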
8
EN
Detection of eye closing/opening from alpha-blocking in the EEG of the occipital region has been used to build human-machine interfaces. This paper presents an alternative method for detecting eye closing/opening from EOG signals in an online setting. The accuracies of correct detection of eye closing and opening with the proposed techniques were found to be 95.6% and 91.9% respectively for 8 healthy subjects. These techniques were then combined with the detection of eye blinks, whose accuracy turned out to be 96.9%. This was then used to build an interface for robotic arm control in a pick-and-place task. The same task was also carried out using a haptic device as a master. The speed and accuracy of the two methods were then compared to quantitatively assess the ease of using this interface. It appears that the proposed interface will be very useful for persons with neurodegenerative disorders who can still perform eye closing/opening and eye blinks.
9
Recognition of static hand gestures using a sensor glove
EN
This work describes a sensor glove that allows real-time recognition of the position of the fingers. The device uses 10 resistive flex sensors fastened to the glove with fabric pockets. The sensors form two rows, one at the height of the metacarpophalangeal joints and the second at the height of the interphalangeal joints. The prototype was tested on a group of three people performing 18 static gestures of Polish sign language. The data obtained were checked for the possibility of automatic classification using three selected classifiers: k-nearest neighbours, a naive Bayesian classifier and a decision tree. Classifications were carried out on a simplified three-dimensional dataset obtained from the sensors located above the interphalangeal joints of the middle finger, index finger and thumb, and on the ten-dimensional dataset obtained from all sensors. The best sign classification results for both sets were obtained with the naive Bayesian classifier, with a maximum recognition rate of 66.66%.
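The classification step described above treats each gesture as a vector of flex-sensor readings. A minimal sketch of one of the three classifiers, k-nearest neighbours, is shown below; kNN is chosen here only because it is the simplest to sketch (the paper's best classifier was actually naive Bayes), and the sensor values and gesture labels are made up.

```python
import math
from collections import Counter

def knn_classify(sample, training_set, k=3):
    """Classify a flex-sensor vector by majority vote of its k nearest
    training vectors.  training_set: list of (vector, label) pairs."""
    dists = sorted(
        (math.dist(sample, vec), label) for vec, label in training_set
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical normalized flex readings (0 = straight, 1 = fully bent).
train = [
    ([0.1, 0.1, 0.1], "open_hand"),
    ([0.2, 0.1, 0.2], "open_hand"),
    ([0.9, 0.8, 0.9], "fist"),
    ([0.8, 0.9, 0.8], "fist"),
]
print(knn_classify([0.85, 0.85, 0.85], train))  # fist
```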
10
An Electrooculography based Human Machine Interface for wheelchair control
EN
This paper presents a novel single-channel Electrooculography (EOG) based Human-Machine Interface (HMI) for helping individuals suffering from severe paralysis or motor degenerative diseases to regain mobility. In this study, we propose a robust system that generates control commands using only one type of asynchronous eye activity (voluntary eye blinks) to navigate a wheelchair without the need for a graphical user interface. The work demonstrates a simple but robust and effective multi-level threshold strategy that generates control commands from multiple features of single, double and triple voluntary eye blinks to trigger predefined actions (forward, right turn, left turn and stop). Experimental trials were carried out on able-bodied and disabled subjects to validate the universal applicability of the algorithms. The system achieved an average command detection and execution accuracy of 93.89% with an information transfer rate (ITR) of 62.64 bits/min, which shows the robust, sensitive and responsive character of the presented interface. In comparison with established state-of-the-art HMI systems, our system achieved a better trade-off between accuracy and ITR while maintaining better performance in all qualitative and quantitative criteria. The results confirm that the proposed system offers a user-friendly, cost-effective and reliable alternative to existing EOG-based HMIs.
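The blink-count-to-command scheme described above can be sketched in a few lines: voluntary blinks are detected as threshold crossings of the EOG amplitude, counted within a window, and the count selects a wheelchair command. The threshold value, the sample window and the count-to-command mapping below are illustrative assumptions, not the paper's multi-level strategy.

```python
# Hedged sketch of blink counting and command mapping; values are invented.

COMMANDS = {1: "forward", 2: "left_turn", 3: "right_turn"}

def count_blinks(eog, threshold=300.0):
    """Count rising crossings of the amplitude threshold (one per blink)."""
    blinks, above = 0, False
    for v in eog:
        if v >= threshold and not above:
            blinks += 1
        above = v >= threshold
    return blinks

def blinks_to_command(eog, threshold=300.0):
    """Map the blink count in one window to a command; default to stop."""
    return COMMANDS.get(count_blinks(eog, threshold), "stop")

signal = [0, 50, 350, 400, 120, 60, 320, 380, 90, 0]  # two blink peaks
print(blinks_to_command(signal))  # left_turn
```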
EN
This paper presents a computer system designed so that a mobile application user can control a prototype machine. The machine's central unit includes an embedded system with appropriate modules and performs specific tasks. This solution can be applied in intelligent building systems.
12
Human Machine Interface (HMI) systems for L2/L3-level cars
EN
Analysis of studies from many countries has shown the significant role of automated cars in enhancing the individual mobility of different user groups, including the elderly and the physically disabled. Depending on the level of automation, the division of tasks between the driver and the car differs. As vehicles become more automated over time, opportunities for further innovation in in-vehicle HMI controls will follow. Presently, HMI designs use various types of input signals, so that, depending on the driver's abilities, user-preferred control devices can be applied. The paper presents a methodology for evaluating HMI input devices, together with example results from the evaluation of a prototype multifunction steering wheel.
13
Human machine interface for piezo control system
EN
The increasing complexity of scientific experiments puts a large number of requirements on the systems that control and monitor the hardware components required for the experiments. This article discusses three different technologies for developing a Human Machine Interface (HMI) for the Piezo Control System (PCS) developed at the Lodz University of Technology (TUL). The purpose of the PCS is to compensate the detuning of a superconducting accelerating structure caused by the Lorentz force. A set of full-custom HMI operator interfaces was needed to operate the prototype device. In order to select the technology that would best suit the requirements, the interface was designed and developed in native Qt, in Qt Quick and in EPICS. The three implementations were compared and their advantages and drawbacks are presented.
EN
This article presents a system architecture for providing priority information that may help public transport vehicle drivers in the decision-making process, based on the European ITS Framework Architecture (FRAME). In the process of creating the functional viewpoint it was shown that the functions, user needs and data flows existing within the FRAME Architecture were not sufficient for the planned system, and it was concluded that appropriate new elements had to be created. The functional, physical and organizational viewpoints designed for this system are presented in this paper.
EN
Nowadays, high energy physics scientists build and design systems that are complex in terms of the huge number of subsystems and individual components. A single subsystem may consist of a few tens of thousands of digital and analogue channels and sensors. As a result, the data rates captured in modern systems may reach gigabytes per second. Complex systems can generate various alarms and provide other diagnostic information, so a huge number of variables are needed to control and monitor the system. Providing access to all alarms and diagnostic information in systems composed of thousands of channels can be a real challenge. It is therefore necessary to develop a methodology for designing Human Machine Interfaces (HMI) that are simple to use yet allow describing relatively complex systems. This paper describes an HMI scheme able to obtain and present data from high energy physics systems; its purpose is to evaluate HMI panels dedicated to complex systems. The prototype HMI uses the demonstration PXIe-based Neutron Flux Monitor (NFM) developed by the Department of Microelectronics and Computer Science, which is going to provide essential information for plasma operation in the ITER plant. The HMI involves a Graphical User Interface (GUI) and an Alarm Management Scheme, both based on the Experimental Physics and Industrial Control System (EPICS) framework. The GUI uses several tools provided by Control System Studio as well as JavaScript, rules and actions to dynamically present data to the operators. With regard to alarm management, a scheme is proposed to handle alarms efficiently by presenting the relevant information and controls needed to react to them quickly.
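The core of any alarm-management scheme like the one above is triage: with thousands of channels, raw alarms must be filtered and ordered by severity so the operator sees the most urgent ones first. The sketch below illustrates that idea only; the severity levels, channel names and ordering rule are invented, and the actual system is built on EPICS.

```python
# Toy alarm triage: active alarms first by severity, then by recency.
# Severity levels and channel names are hypothetical.

SEVERITY = {"MAJOR": 2, "MINOR": 1, "OK": 0}

def triage(alarms, limit=3):
    """Keep only active alarms, most severe (then most recent) first."""
    active = [a for a in alarms if a["severity"] != "OK"]
    active.sort(key=lambda a: (-SEVERITY[a["severity"]], -a["time"]))
    return active[:limit]

alarms = [
    {"channel": "NFM:HV:01",   "severity": "MINOR", "time": 10},
    {"channel": "NFM:TEMP:02", "severity": "MAJOR", "time": 12},
    {"channel": "NFM:HV:02",   "severity": "OK",    "time": 13},
    {"channel": "NFM:RATE:01", "severity": "MAJOR", "time": 11},
]
for a in triage(alarms):
    print(a["channel"], a["severity"])
```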
EN
The contemporary battlefield is an increasingly demanding environment, above all for soldiers but also for their equipment and the machines that assist them. Recent Lessons Learned experience shows that the most dangerous threats met during recent asymmetric warfare were Improvised Explosive Devices (IEDs). Therefore, the main focus is on equipping UGVs with efficient engineering equipment that allows operating very quickly in very dangerous environments, with exposure to health risks and loss of human life, within increasingly sophisticated engineering support missions, with special regard to EOD/IED missions. The concept of such an engineering robot design is described in this paper: the EOD/IED mission support engineer robot, its attachments, the removal of car bombs by means of those attachments, the robot's ability to overcome terrain obstacles, and its remote control panel. The paper was presented at the conference "Technological and Operational Problems Connected with UGV Application for Future Military Operations", held in Rzeszow, Poland, on 20-22 April 2015.
EN
This paper presents examples of the use of automatic speech recognition systems to build human-machine voice interfaces, and briefly describes how such applications work in terms of voice control and communication. The final part presents the concept and design of a speech recognition system for communicating with a 32-bit modular washing machine controller.
EN
This paper presents the use of the Omega 7 haptic console to control a KUKA LWR 4+ robot with a mounted puncture-needle holder. The KUKA manipulator was used for the removal of an otogenic brain abscess. The first part of the paper presents a simple simulator in which the Omega 7 haptic console was used to carry out an injection into a virtual brain abscess and to withdraw fluid from inside it. The following part presents an experimental set-up with the KUKA LWR 4+ robot and the Omega 7 console, and the first tests of the operation using a model of the skull.
EN
The paper concerns research on improving the control and supervision of web-connected mobile robots using a Physics Processing Unit (PPU). The PPU computations considered include rigid body dynamics, collision detection and raycasting. The result is improved by a Human Machine Interface that allows performing semantic simulation during multi-robot task execution. The semantic simulation engine provides tools to implement a mobile robot simulation based on real data delivered by the robots' observations in an indoor environment. Real objects such as robots are supervised by association with their virtual representations in the simulation, so events such as object intersection and robot orientation (pitch and roll) can be monitored. The simulation can be integrated with the real part of the system under the assumption of robust localization of real entities, so Augmented Reality capabilities are available.
EN
Based on recent advances in non-linear analysis, the surface electromyography (sEMG) signal has been studied from the viewpoints of self-affinity and complexity. In this study, we examine the use of the critical exponent analysis (CE) method, a fractal dimension (FD) estimator, to study properties of the sEMG signal and to deploy these properties to characterize different movements for gesture recognition. sEMG signals were recorded from thirty subjects performing seven hand movements, using eight muscle channels. Mean values and coefficients of variation of the CE from all experiments show that there are large variations between hand movement types but small variation within the same type. They also show that the CE feature, related to the self-affine property of the sEMG signal, extracted from different activities lies in the range 1.855-2.754. These results have also been evaluated by analysis of variance (p-value). The results show that the CE feature is more suitable as a learning parameter for a classifier than other representative features, including root mean square, median frequency and Higuchi's method; most p-values of the CE feature were less than 0.0001. Thus the FD computed by the CE method can be used as a feature for a wide variety of sEMG applications.
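Higuchi's method, named above as one of the baseline FD estimators (not the CE method the paper proposes), is compact enough to sketch: compute the average normalized curve length L(k) at scales k = 1..kmax, then take the slope of log L(k) against log(1/k). The kmax value and test signal below are illustrative choices.

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal-dimension estimate of a 1-D signal: slope of
    log L(k) versus log(1/k) for curve lengths L(k) at scales k=1..kmax."""
    n = len(x)
    log_inv_k, log_len = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                  # one subseries per start offset
            num = (n - 1 - m) // k          # number of steps at this offset
            if num < 1:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, num + 1))
            lengths.append(dist * (n - 1) / (num * k) / k)  # normalized length
        log_inv_k.append(math.log(1.0 / k))
        log_len.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L(k) against log(1/k)
    mk = sum(log_inv_k) / len(log_inv_k)
    ml = sum(log_len) / len(log_len)
    return (sum((a - mk) * (b - ml) for a, b in zip(log_inv_k, log_len))
            / sum((a - mk) ** 2 for a in log_inv_k))

print(round(higuchi_fd(list(range(200))), 3))  # a straight line has FD 1.0
```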