The identity of a spoken language has traditionally been determined with statistical models over audio samples. A drawback of these approaches is that phonetically transcribed data are not available for all languages. This work proposes an approach based on image classification that utilises image representations of audio samples. Our model uses neural networks and deep learning to analyse and classify three languages. The input to our network is a spectrogram, which is processed through the network to extract local visual and temporal features for language prediction. The model achieved 95.56% accuracy on test samples from the three languages.
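To make the pipeline above concrete, the following is a minimal Python sketch of a spectrogram-based convolutional classifier for three languages. The Keras framework, the 128x128 single-channel input resolution, and all layer sizes are assumptions chosen for illustration; the abstract does not specify the authors' exact architecture.

    # Minimal sketch of a spectrogram-based CNN language classifier.
    # Assumptions (not from the abstract): spectrograms are precomputed
    # 128x128 single-channel arrays; layer sizes are illustrative only.
    import numpy as np
    from tensorflow.keras import layers, models

    NUM_LANGUAGES = 3  # the abstract classifies three languages

    def build_model(input_shape=(128, 128, 1)):
        """Small CNN that extracts local visual features from a
        spectrogram and maps them to a language distribution."""
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(NUM_LANGUAGES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    if __name__ == "__main__":
        # Random arrays standing in for real spectrograms and labels.
        x = np.random.rand(32, 128, 128, 1).astype("float32")
        y = np.random.randint(0, NUM_LANGUAGES, size=32)
        model = build_model()
        model.fit(x, y, epochs=1, batch_size=8)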
The increasing proliferation of images produced by the multimedia capabilities of hand-held devices has led to a loss of source information as the images move between devices. Once stored away from their original source, these images are cumbersome to search because they lose their descriptive data. This work developed a model that encapsulates descriptive metadata into the Exif section of the image header for effective retrieval and mobility. The resulting metadata used for retrieval purposes is mobile, searchable and unobtrusive.
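As an illustration of the kind of embedding the abstract describes, below is a minimal Python sketch that writes a free-text description into the Exif ImageDescription tag of a JPEG header and reads it back. The piexif library, the choice of tag, and the file name photo.jpg are assumptions for illustration, not the paper's own tooling.

    # Sketch of embedding searchable descriptive metadata in the Exif
    # section of a JPEG header, then reading it back for retrieval.
    # Assumptions: the piexif library and the file name are illustrative.
    import piexif

    def embed_description(path, description):
        """Store a free-text description in the Exif ImageDescription
        tag so it travels with the file when it is copied or shared."""
        exif_dict = piexif.load(path)
        exif_dict["0th"][piexif.ImageIFD.ImageDescription] = \
            description.encode("utf-8")
        # Rewrite the image header in place with the new metadata.
        piexif.insert(piexif.dump(exif_dict), path)

    def read_description(path):
        """Recover the embedded description for search or retrieval."""
        exif_dict = piexif.load(path)
        raw = exif_dict["0th"].get(piexif.ImageIFD.ImageDescription, b"")
        return raw.decode("utf-8", errors="replace")

    if __name__ == "__main__":
        # photo.jpg is a hypothetical existing JPEG file.
        embed_description("photo.jpg", "Project demo, conference hall, 2019")
        print(read_description("photo.jpg"))

Because the description lives inside the image file itself rather than in an external database, it survives copying the file to another device, which is what makes the metadata mobile in the sense the abstract uses.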