Results found: 2

Search results
Searched in keywords: multilayer neural network
Result 1 — abstract (EN)
In this paper, a multilayer feedforward neural network (MLFFNN) is proposed for solving the forward and inverse kinematics of a robotic manipulator. For the forward kinematics solution, two cases are presented. In the first case, an MLFFNN is designed and trained to find only the position of the robot end-effector; in the second case, another MLFFNN is designed and trained to find both the position and the orientation of the end-effector. Both MLFFNNs take the joint positions as inputs. For the inverse kinematics solution, an MLFFNN is designed and trained to find the joint positions, taking the position and orientation of the end-effector as inputs. Training data for the proposed MLFFNNs is prepared in MATLAB for two different cases: in the first, the data is generated assuming an incremental motion of the robot's joints, whereas in the second, the data is obtained from a real robot executing a sinusoidal joint motion. The MLFFNNs are trained with the Levenberg-Marquardt algorithm. The method generalizes to manipulators with any number of degrees of freedom, including more complex robots such as 6-DOF and 7-DOF arms, but for simplicity it is applied in this paper to a 2-DOF planar robot. The results show that the approximation error between the desired output and the MLFFNN estimate is very low, approximately equal to zero; in other words, the MLFFNN is efficient enough to solve both the forward and inverse kinematics problems regardless of the joint motion type.
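The abstract does not include code, but the setup can be illustrated. The sketch below, written in Python rather than the authors' MATLAB workflow, generates forward-kinematics training data for a 2-DOF planar robot over an incremental joint grid and fits a single-hidden-layer feedforward network with a Levenberg-Marquardt least-squares solver. The link lengths, grid resolution, and hidden-layer size are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (not the authors' code): forward-kinematics data for a
# 2-DOF planar robot and a small feedforward network fitted with a
# Levenberg-Marquardt solver. Link lengths, grid resolution, and the
# hidden-layer size are assumptions made for illustration.
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 1.0, 0.7  # assumed link lengths [m]

def forward_kinematics(q):
    """End-effector position of a 2-DOF planar arm; q has columns (q1, q2)."""
    q1, q2 = q[:, 0], q[:, 1]
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.column_stack([x, y])

# Training data: incremental joint motion over a coarse grid (first data case in the paper).
g1, g2 = np.meshgrid(np.linspace(-np.pi, np.pi, 25), np.linspace(-np.pi, np.pi, 25))
Q = np.column_stack([g1.ravel(), g2.ravel()])  # network inputs: joint positions
P = forward_kinematics(Q)                      # network targets: end-effector position

# Single-hidden-layer feedforward network (tanh hidden units, linear output).
H = 10                                         # assumed number of hidden neurons
sizes = [(H, 2), (H,), (2, H), (2,)]           # shapes of W1, b1, W2, b2

def unpack(theta):
    parts, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def residuals(theta):
    W1, b1, W2, b2 = unpack(theta)
    hidden = np.tanh(Q @ W1.T + b1)
    pred = hidden @ W2.T + b2
    return (pred - P).ravel()

theta0 = 0.1 * np.random.default_rng(0).standard_normal(sum(int(np.prod(s)) for s in sizes))
fit = least_squares(residuals, theta0, method="lm")  # Levenberg-Marquardt, as in the paper
print("final RMS approximation error:", np.sqrt(np.mean(fit.fun ** 2)))
```

An inverse-kinematics network would follow the same pattern with the roles of Q and P swapped, and the real-robot, sinusoidal-motion data case would simply replace the synthetic grid with recorded joint trajectories.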
Result 2 — abstract (EN)
A tongue machine interface (TMI) is a tongue-operated assistive technology that enables people with severe disabilities to control their environment using tongue motion. In many disorders, such as amyotrophic lateral sclerosis or stroke, people can communicate with the external world only to a limited degree: their body may be disabled while their mind remains intact. Various tongue-machine interface techniques have been developed to support these people by providing an additional communication pathway. In this study, we aimed to develop a tongue-machine interface by investigating patterns in glossokinetic potential (GKP) signals with neural networks, using simple right/left tongue touches to the buccal walls for 1-D control and communication; the approach is named GKP-based TMI. As reported in the literature, the tongue is connected to the brain via the hypoglossal cranial nerve. It therefore generally escapes severe damage in spinal cord injuries and is affected more slowly than the limbs in many neuromuscular degenerative disorders. In this work, 8 male and 2 female naive healthy subjects, aged 22 to 34 years, participated. A multilayer neural network and a probabilistic neural network were employed as classification algorithms, with root-mean-square and power spectral density feature extraction. The greatest success rate achieved was 97.25%. This study may help disabled people control assistive devices in a natural, unobtrusive, speedy, and reliable manner. Moreover, GKP-based TMI is expected to serve as a complementary channel for traditional electroencephalography (EEG)-based brain-computer interfaces, which have significant inadequacies arising from the EEG signals.
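As with the first result, a brief sketch can make the processing chain concrete. The Python example below is an assumption-laden illustration, not the authors' pipeline: it extracts per-channel root-mean-square and Welch power-spectral-density features from segmented multichannel recordings and trains a multilayer-perceptron classifier for the right/left touch classes (the probabilistic neural network used in the study is not shown). The channel count, sampling rate, epoch length, and synthetic signals are placeholders for real GKP data.

```python
# Minimal sketch (not the authors' pipeline): RMS + PSD feature extraction
# from epoched multichannel GKP-like signals, classified with a multilayer
# perceptron. All data here is synthetic; channel count, sampling rate and
# epoch length are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

fs = 250                                        # assumed sampling rate [Hz]
n_epochs, n_channels, n_samples = 200, 8, fs    # assumed: 1-second epochs, 8 channels

rng = np.random.default_rng(0)
epochs = rng.standard_normal((n_epochs, n_channels, n_samples))  # placeholder for recorded epochs
labels = rng.integers(0, 2, n_epochs)                            # 0 = left touch, 1 = right touch

def extract_features(epoch):
    """Concatenate per-channel RMS values and Welch PSD estimates."""
    rms = np.sqrt(np.mean(epoch ** 2, axis=1))            # one RMS value per channel
    _, psd = welch(epoch, fs=fs, nperseg=128, axis=1)     # PSD estimate per channel
    return np.concatenate([rms, psd.ravel()])

X = np.array([extract_features(e) for e in epochs])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

With real GKP epochs in place of the random placeholders, the reported accuracies would depend on subject, feature choice (RMS vs. PSD), and classifier, as the study compares.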