Algorithms for independent component analysis (ICA) based on the optimization of information-theoretic criteria over differential manifolds have been devised over the last few years. The principles informing their design lead to various classes of learning rules, including fixed-point and geodesic-based ones. Such learning algorithms differ mainly in the way single learning steps are effected in the neural system's parameter space, i.e., in the action through which a connection variable is moved in the parameter space toward the optimal connection pattern. In the present paper, we introduce a new class of learning algorithms by recalling, from the differential-geometry literature, the concept of mapping onto manifolds, which provides a general way of acting on a neural system's connection variable in order to optimize the learning criteria. The numerical behavior of the introduced learning algorithms is illustrated and compared through experiments carried out on mixtures of statistically independent signals.
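To make the geodesic-based class of updates mentioned above concrete, the following is a minimal, hypothetical sketch of one geodesic learning step on the orthogonal group: a placeholder Euclidean gradient `G` of some ICA contrast (an assumption, not the paper's specific criterion) is projected to a skew-symmetric direction, and the connection matrix is moved along the corresponding geodesic via the matrix exponential, which keeps the iterate on the manifold.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Hypothetical illustration of a single geodesic step on the orthogonal
# group O(n); G is a placeholder Euclidean gradient of an unspecified
# ICA contrast function.
n = 3
W = np.linalg.qr(rng.standard_normal((n, n)))[0]  # orthogonal starting point
G = rng.standard_normal((n, n))                   # placeholder gradient
Omega = G @ W.T - W @ G.T                         # skew-symmetric direction
eta = 0.05                                        # learning-step size
W_new = expm(-eta * Omega) @ W                    # move along a geodesic

# Because expm of a skew-symmetric matrix is orthogonal, the update
# stays exactly on the manifold of orthogonal connection matrices.
print(np.allclose(W_new @ W_new.T, np.eye(n)))    # True up to round-off
```

The mapping-based rules introduced in the paper generalize this idea: the matrix exponential is one particular mapping onto the manifold, and other mappings induce other actions on the connection variable.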