In this paper the authors propose a time optimization of fast normalized cross-correlation methods for image processing and optical character recognition, using parallel computing techniques realized on graphics processing units (GPUs). It is shown that a suitable modification of the well-known formulas and their parallel implementation on GPUs can substantially accelerate computation without any change in the quality of the results. The research performed includes a comparative analysis of the time efficiency of the developed methods with respect to their standard sequential implementations.
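For reference, the standard (unoptimized) normalized cross-correlation score between an image patch and a template can be sketched as below. This is a minimal sequential illustration of the well-known formula only, not the paper's accelerated GPU method; the function name `ncc` and the NumPy implementation are illustrative assumptions.

```python
import numpy as np

def ncc(patch, template):
    # Normalized cross-correlation: subtract the means, then divide the
    # cross term by the product of the standard-deviation-like norms.
    # (Illustrative sequential sketch, not the paper's GPU implementation.)
    f = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((f ** 2).sum() * (t ** 2).sum())
    return float((f * t).sum() / denom)
```

The score lies in [-1, 1]: identical patches score 1, and a sign-inverted patch scores -1, which is why the measure is insensitive to uniform brightness and contrast changes.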