This article describes experiments conducted with two groups of people: healthy participants and participants with cerebral palsy. The purpose of the experiments was to verify two hypotheses: 1) that people with cerebral palsy can interact with a computer using their natural gestures, and 2) that tablet games can serve as a research tool in experiments with young disabled people. Two games, which require the user to make simple gestures to accomplish given tasks, were designed and implemented on a tablet device with a built-in camera. Because of the camera's limitations, the tracking process employs blue markers. By moving a hand with a blue marker, the user could perform the navigation tasks in both games. In the first game the user had to gather, within 30 seconds, as many objects as possible; the objects were placed on the screen in a grid pattern. In the second game the user had to catch one object 20 times, with the object's position changing after each catch. Results obtained by the healthy participants served as a reference. Although there are significant differences between the measured parameters of the two groups, all participants - healthy and with cerebral palsy - were able to accomplish the tasks using simple gestures. Tablet games turned out to be a very attractive research tool from the perspective of young disabled users, who participated in measurement sessions much more willingly than in experiments without games.
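The marker-tracking step can be sketched as a colour threshold followed by a centroid computation. This is a minimal sketch: the thresholds (`blue_min`, `dominance`) and the function name `track_blue_marker` are illustrative assumptions, not the values or code used in the original games.

```python
import numpy as np

def track_blue_marker(frame, blue_min=128, dominance=1.5):
    """Return the (row, col) centroid of blue-dominant pixels, or None.

    frame: HxWx3 uint8 RGB image. A pixel counts as 'blue' when its
    blue channel exceeds `blue_min` and dominates the red and green
    channels by the `dominance` factor (illustrative thresholds).
    """
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    mask = (b > blue_min) & (b > dominance * r) & (b > dominance * g)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no marker visible in this frame
    return float(ys.mean()), float(xs.mean())
```

The centroid returned for each camera frame can then drive the on-screen cursor that gathers or catches game objects.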
In this paper, we performed recognition of isolated sign language gestures - obtained from the Australian Sign Language database (AUSLAN) - using statistics to reduce dimensionality and neural networks to recognize patterns. We designed a set of 70 signal features to represent each gesture as a feature vector instead of a time series, and used principal component analysis (PCA) and independent component analysis (ICA) to reduce dimensionality and to indicate the features most relevant for gesture recognition. A feedforward neural network was used to classify the vectors. The resulting recognition accuracy ranged from 61% to 87%.
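The PCA-plus-feedforward-network pipeline can be sketched in plain NumPy. This is only a sketch under stated assumptions: the hidden-layer size, learning rate, epoch count and helper names are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_fit(X, n_components):
    """PCA via SVD: returns the data mean and a (d, k) projection matrix."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components].T

def pca_transform(X, mu, W):
    return (X - mu) @ W

def train_mlp(X, y, n_classes, hidden=16, lr=0.5, epochs=500):
    """Minimal one-hidden-layer softmax classifier (illustrative sizes)."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_classes)); b2 = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        logits = H @ W2 + b2
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        G = (P - Y) / n                       # softmax cross-entropy gradient
        W2 -= lr * H.T @ G; b2 -= lr * G.sum(axis=0)
        GH = (G @ W2.T) * (1 - H ** 2)        # backprop through tanh
        W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    return np.argmax(np.tanh(X @ W1 + b1) @ W2 + b2, axis=1)
```

Each 70-dimensional gesture feature vector would be projected with `pca_transform` before being fed to the network.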
The author presents sign language features that can provide the basis for automatic sign language recognition systems. Using parameters such as position, velocity, angular orientation and finger bending, together with the conventional or derivative dynamic time warping algorithms, classification of 95 signs from the AUSLAN database was performed. Depending on the number of parameters used in classification, different accuracy values (defined as the ratio of correctly recognized gestures to all gestures in the test set) were obtained, with the highest value of 87.7% for classification based on all the features and the derivative dynamic time warping method.
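The two warping variants mentioned above can be sketched as follows: a conventional DTW distance, a Keogh-style derivative transform (applying DTW to it gives derivative DTW), and nearest-template classification. The 1-NN setup and function names are illustrative assumptions; the paper's exact classifier configuration is not reproduced here.

```python
import numpy as np

def dtw_distance(a, b):
    """Conventional dynamic time warping distance between 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def derivative(a):
    """Keogh-style derivative estimate; DTW on this gives derivative DTW."""
    a = np.asarray(a, float)
    return ((a[1:-1] - a[:-2]) + (a[2:] - a[:-2]) / 2.0) / 2.0

def classify_1nn(query, templates):
    """templates: dict label -> sequence; returns the nearest-DTW label."""
    return min(templates, key=lambda lbl: dtw_distance(query, templates[lbl]))
```

Derivative DTW is obtained by calling `classify_1nn(derivative(q), {k: derivative(v) for k, v in templates.items()})`, which compares local shapes rather than raw amplitudes.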
Gesture recognition with inertial sensors can serve as an alternative to standard human-computer interfaces. A sensor containing a triaxial accelerometer, magnetometer and gyroscope was used to track gestures. Previous studies relied on acceleration signals only. The authors proposed and compared solutions based on the analysis of both acceleration and spatial orientation, and allowed the examined persons to perform gestures in a natural way. The results show that the DTW (Dynamic Time Warping) algorithm enables both classification individualized for a given person (with 92% accuracy) and generalized classification based on a universal exemplar (with 83% accuracy).
Gesture recognition may be applied to the control of computer applications and electronic devices as an alternative to standard human-machine interfaces. This paper reports a method of gesture classification based on the analysis of data from a 9DOF inertial sensor - the NEC-TOKIN Motion Sensor MDP-A3U9S (Fig. 1). Nine volunteers were asked to perform 10 different gestures (shown in Fig. 2) in a natural way with a sensor attached to their hand. A gesture database consisting of 2160 files with triaxial acceleration and orientation signals was created. In the first step the data were divided into training and testing sets. The designed system uses the Dynamic Time Warping (DTW) algorithm to calculate the similarity of signals (formulas (1)-(3)). Using this method, the authors chose representative signals from the training set for the individual and generalized exemplar databases. The DTW algorithm was also used in the classification process. Different recognition approaches were tested, based on acceleration-only, orientation-only and combined acceleration-orientation signals. The results listed in Tab. 4 show that the best recognition efficiency of 92% was obtained in individual recognition (gestures of only one person taken into account) with the modified exemplar database. The modification proposed by the authors (Section 3) improved the recognition rate by 10 percentage points. An efficiency of 83% (Tab. 5) was reached in the generalized case. The next step in improving the designed recognition system is the application of an inertial sensor with a Bluetooth module and real-time gesture classification.
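The exemplar-based scheme described above - pick a representative training signal per gesture and classify a new signal by its nearest exemplar under DTW - can be sketched as follows. Choosing the medoid (the training signal with minimum total DTW distance to the others of its class) is an assumption for illustration; the authors' exact selection and modification rules are not reproduced here.

```python
import numpy as np

def dtw(a, b):
    """DTW distance for (possibly multi-channel) signals: one row per sample."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def medoid_exemplar(signals):
    """Representative signal: minimum total DTW distance to the rest."""
    totals = [sum(dtw(s, t) for t in signals) for s in signals]
    return signals[int(np.argmin(totals))]

def build_exemplars(training):
    """training: dict gesture -> list of signals; one exemplar per gesture."""
    return {g: medoid_exemplar(sigs) for g, sigs in training.items()}

def classify(signal, exemplars):
    return min(exemplars, key=lambda g: dtw(signal, exemplars[g]))
```

An individual exemplar database would be built from one person's training signals, a generalized one from all persons pooled; the acceleration-only, orientation-only and combined variants differ only in which channels each signal array contains.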
During public presentations or interviews, speakers commonly and unconsciously abuse interjections or filled pauses, which interfere with speech fluency and negatively affect the listeners' impression and speech perception. Types of disfluencies and methods of their detection are reviewed. The authors carried out a survey whose results indicated the elements most adverse for the audience. The article presents an approach to automatic detection of the most common type of disfluency - filled pauses. A database of filled-pause patterns (prolonged I, prolonged e, mm, Im, xmm, in SAMPA notation) was collected from 72 minutes of recordings of public presentations and interviews of six speakers (3 male, 3 female). A statistical analysis of the length and frequency of occurrence of such interjections in the recordings is presented. Then each pattern from the training set was described with the mean values of the first and second formants (F1 and F2). Detection was performed on a test set of recordings by recognizing the phonemes from the two formants, with a recognition efficiency of about 68%. The results of this research on the detection of speech disfluencies may be applied in a system that analyzes speech and provides feedback on the imperfections that occurred, in order to help train oratorical skills. A conceptual prototype of such an application is proposed. Moreover, a database of patterns of the most common disfluencies can be used in speech recognition systems to avoid transcribing interjections during speech-to-text conversion.
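Matching a frame's (F1, F2) pair against the stored pattern means can be sketched as a nearest-neighbour lookup with a rejection threshold. The formant values and the `max_dist` threshold below are illustrative placeholders, not the values estimated from the authors' recordings.

```python
import math

# Illustrative mean formant values in Hz for a few filled-pause patterns;
# placeholder numbers, NOT the means measured by the authors.
PATTERNS = {
    "I:": (350.0, 1900.0),
    "e:": (550.0, 1750.0),
    "mm": (300.0, 1100.0),
}

def match_filled_pause(f1, f2, patterns=PATTERNS, max_dist=250.0):
    """Return the label of the nearest (F1, F2) pattern mean, or None
    if no pattern lies within `max_dist` Hz (Euclidean distance)."""
    label, best = None, float("inf")
    for name, (m1, m2) in patterns.items():
        d = math.hypot(f1 - m1, f2 - m2)
        if d < best:
            label, best = name, d
    return label if best <= max_dist else None
```

Frames rejected by the threshold are treated as fluent speech, so only segments whose formants sit near a stored pattern are flagged as filled pauses.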
This work concerns a system for gait examination and analysis. Accelerations are measured at various anatomical points with triaxial acceleration sensors, data are transmitted over Bluetooth, and a portable computer is used for acquisition. The software for recording, processing and analyzing the data was written in the LabVIEW environment. Gait examinations were carried out on 17 volunteers. Parameters related to gait-cycle events were obtained, along with high sensitivity (91-94%) and specificity (88-89%) of event detection and a satisfactory value of the %R&R parameter (16%).
Gait analysis provides useful information about spatio-temporal parameters [10, 11], stability and balance [7], progression of diseases (Parkinson's, Huntington's) [7, 8], results of rehabilitation [6] and shock attenuation [7, 9]. The paper describes an accelerometer-based system designed for motion and gait examination. The system consists of two measurement modules with triaxial ADXL accelerometers (Tab. 1), a portable computer and software implemented in the LabVIEW environment. The system transmits data over a Bluetooth network; during an examination the data are received on the portable computer, visualized on a graph and written to a text file (Fig. 2). The text file has a special header containing information about the examined person, the anatomical axes and the name of the module attachment site. All this information is entered by the user at the beginning of the examination. After signal processing, several parameters are calculated: the mean duration of the gait cycle, the mean durations of the swing and stance phases as percentages of the gait cycle, and the acceleration range (Fig. 1, Tab. 2) [15]. Detection of gait-cycle events (heel strike, toe off) is based on the analysis of local extrema of the RSS parameter (Formula 1) [16]. For every anatomical point, a graph of the accelerations over the whole mean gait cycle is also visualized (Fig. 4). At the end of the data analysis, an examination report is prepared as a Microsoft Word document. System tests were performed on 17 volunteers (Fig. 3) who underwent gait examination. Depending on the goal, one or both modules were used, with different attachment sites: ankles, knees, hips, sacrum, neck and head. High values of sensitivity (91-94%) and specificity (88-89%) of event detection, as well as a satisfactory value of the %R&R parameter (16%), were obtained.
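Assuming RSS denotes the root sum of squares of the three acceleration axes (its usual definition in gait work), the event-detection step can be sketched as peak picking on the RSS signal. The threshold, minimum peak gap and function names below are illustrative assumptions, not the system's exact extremum logic.

```python
import numpy as np

def rss(ax, ay, az):
    """Resultant acceleration: root sum of squares of the three axes
    (assumed form of Formula 1)."""
    return np.sqrt(np.square(ax) + np.square(ay) + np.square(az))

def local_maxima(signal, threshold, min_gap):
    """Indices of local maxima above `threshold`, at least `min_gap`
    samples apart - a simple peak picker standing in for the original
    extremum analysis."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]):
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

def mean_cycle_duration(peak_indices, fs):
    """Mean gait-cycle duration in seconds from successive heel strikes."""
    d = np.diff(peak_indices) / fs
    return float(d.mean()) if len(d) else float("nan")
```

From the detected heel strikes (and an analogous detector for toe-off), the mean cycle duration and the swing/stance percentages follow by simple interval arithmetic.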