Results found: 31

Search results
Searched in keywords: speech enhancement
EN
In this article a perceptually motivated multichannel speech enhancement system is presented. The proposed approach uses a generalized sidelobe canceller (GSC) method for speech dereverberation and noise suppression. The conventional GSC structure has been modified by introducing a weighting factor into the noise cancellation loop. This allows for perceptually optimal shaping of the residual noise spectrum, which results in decreased speech distortion. A comparative evaluation of the selected methods has been performed using objective speech quality measures. Experimental results show that the proposed approach outperforms conventional ones, providing better speech quality.
EN
The subjective logatom articulation index of speech signals enhanced by means of various digital signal processing methods has been measured. To improve intelligibility, the convolutive blind source separation (BSS) algorithm by Parra and Spence [1] has been used in combination with classical denoising algorithms. The efficiency of these algorithms has been investigated for speech material recorded in two spatial configurations. It has been shown that the BSS algorithm can substantially improve speech recognition. Moreover, combining BSS with single-microphone denoising methods can further increase the logatom articulation index.
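A typical "classical denoising algorithm" to combine with a BSS output is magnitude spectral subtraction; a minimal single-frame sketch follows. The oversubtraction factor, spectral floor, and frame handling are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def spectral_subtract(noisy, noise_psd, alpha=2.0, floor=0.01):
    """Subtract a noise power estimate from the frame's power spectrum,
    keeping a small spectral floor to avoid musical-noise artifacts."""
    frame = np.fft.rfft(noisy)
    power = np.abs(frame) ** 2
    clean_power = np.maximum(power - alpha * noise_psd, floor * power)
    gain = np.sqrt(clean_power / np.maximum(power, 1e-12))
    return np.fft.irfft(gain * frame, n=len(noisy))

rng = np.random.default_rng(0)
x = rng.normal(size=512)   # stand-in for one analysis frame
```

With a zero noise estimate the frame passes through unchanged; with a positive estimate the per-bin gains never exceed one, so output energy cannot grow.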
Application of Variational Mode Decomposition on Speech Enhancement
EN
Enhancement of speech signals and reduction of noise remain challenging tasks for researchers. Among the many available methods, signal decomposition has attracted considerable attention in recent years. Empirical Mode Decomposition (EMD) has been applied to many decomposition problems. Recently, Variational Mode Decomposition (VMD) was introduced as an alternative that can readily separate signals of similar frequencies. This paper proposes VMD as the signal decomposition algorithm for denoising and enhancement of speech signals. VMD decomposes the recorded speech signal into several modes. Speech contaminated with different types of noise is adaptively decomposed into components, called Intrinsic Mode Functions (IMFs), by a sifting process as in the EMD method. The denoising technique is then applied using VMD. Each of the decomposed modes is compact. Simulation results show that the proposed method is well suited to speech enhancement and noise removal, restoring the original signal.
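The decompose-denoise-reconstruct pattern behind mode-based methods can be illustrated without implementing VMD's variational solver: as a crude stand-in, this sketch splits the signal into fixed FFT bands ("modes") and keeps only the highest-energy ones. The band count and selection rule are illustrative assumptions.

```python
import numpy as np

def bandpass_modes(x, n_modes=4):
    """Split a signal into contiguous FFT bands (a crude stand-in for VMD modes)."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_modes + 1, dtype=int)
    modes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xi = np.zeros_like(X)
        Xi[lo:hi] = X[lo:hi]
        modes.append(np.fft.irfft(Xi, n=len(x)))
    return modes

def denoise(x, n_modes=4, keep=2):
    """Reconstruct from the highest-energy modes, discarding the rest."""
    modes = bandpass_modes(x, n_modes)
    energies = [np.sum(m ** 2) for m in modes]
    order = np.argsort(energies)[::-1][:keep]
    return sum(modes[i] for i in order)

x = np.random.default_rng(0).normal(size=256)
modes = bandpass_modes(x)
```

Because the bands partition the spectrum, summing all modes reconstructs the input exactly, mirroring the completeness property of a mode decomposition.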
EN
Speech enhancement in strong noise conditions is a challenging problem. Low-rank and sparse matrix decomposition (LSMD) theory has recently been applied to speech enhancement with good results. Existing LSMD algorithms consider each frame as an individual observation. However, real-world speech usually has a temporal structure, and its acoustic characteristics vary slowly as a function of time. In this paper, we propose a temporal continuity constrained low-rank sparse matrix decomposition (TCCLSMD) based speech enhancement method. In this method, speech separation is formulated as a TCCLSMD problem and temporal continuity constraints are imposed in the LSMD process. We develop an alternating optimisation algorithm for noisy spectrogram decomposition. By means of TCCLSMD, the recovered speech spectrogram is more consistent with the structure of the clean speech spectrogram, and it can lead to more stable and reasonable results than existing LSMD algorithms. Experiments with various types of noise show that the proposed algorithm achieves better performance than traditional speech enhancement algorithms, in terms of less residual noise and lower speech distortion.
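The core low-rank/sparse split of a noisy spectrogram can be sketched as one step of singular-value shrinkage plus soft-thresholding; the thresholds are illustrative, and this omits the paper's temporal continuity constraints and iteration scheme entirely.

```python
import numpy as np

def lsmd_step(M, tau=1.0, lam=0.1):
    """One low-rank/sparse split of a magnitude spectrogram M:
    singular-value shrinkage gives the low-rank (noise) part,
    soft-thresholding of the residual gives the sparse (speech) part."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U * np.maximum(s - tau, 0.0)) @ Vt            # low-rank component
    R = M - L
    S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # sparse component
    return L, S

# Rank-1 toy "spectrogram": the shrinkage keeps it low-rank
M = np.outer(np.ones(6), np.ones(8))
L, S = lsmd_step(M)
```

Iterating such steps (with the continuity constraints) is what drives the full TCCLSMD decomposition.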
EN
Although various speech enhancement techniques have been developed for different applications, existing methods are limited in noisy environments with high ambient noise levels. Speech presence probability (SPP) estimation is a speech enhancement technique for reducing speech distortion, especially in low signal-to-noise ratio (SNR) scenarios. In this paper, we propose a new two-dimensional (2D) Teager-energy-operator (TEO) improved SPP estimator for speech enhancement in the time-frequency (T-F) domain. The wavelet packet transform (WPT), a multiband decomposition technique, is used to concentrate the energy distribution of speech components. A minimum mean-square error (MMSE) estimator is derived based on the generalized gamma distribution speech model in the WPT domain. In addition, speech samples corrupted by environmental and occupational noises (i.e., machine shop, factory, and station) at different input SNRs are used to validate the proposed algorithm. Results suggest that the proposed method achieves a significant enhancement in perceptual quality compared with four conventional speech enhancement algorithms (i.e., MMSE-84, MMSE-04, Wiener-96, and BTW).
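How an SPP estimate modulates a spectral gain can be shown with the standard soft-decision form: interpolate, in the log domain, between a speech-present gain and a small floor. The Wiener gain used here is a simple stand-in, not the authors' generalized-gamma MMSE estimator.

```python
def spp_gain(snr_prior, spp, g_min=0.1):
    """Soft-decision gain: Wiener gain under speech presence,
    floor gain under absence, weighted by the SPP in the log domain."""
    wiener = snr_prior / (1.0 + snr_prior)
    return (wiener ** spp) * (g_min ** (1.0 - spp))
```

At SPP = 1 the rule reduces to the plain Wiener gain; at SPP = 0 it applies only the floor, which is what suppresses noise-only bins.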
EN
The paper presents the results of sentence and logatome speech intelligibility measured in rooms with an induction loop for hearing aid users. Two rooms with different acoustic parameters were chosen. Twenty-two subjects with mild, moderate, and severe hearing impairment using hearing aids took part in the experiment. The intelligibility tests, composed of sentences or logatomes, were presented to the subjects at fixed measurement points of an enclosure. It was shown that a sentence test is a more useful tool for speech intelligibility measurements in a room than a logatome test. It was also shown that an induction loop is a very efficient system for improving speech intelligibility. Additionally, the questionnaire data showed that the induction loop, apart from improving speech intelligibility, increased the subjects' general satisfaction with speech perception.
Speech emotion recognition under white noise
EN
Speakers' emotional states are recognized from speech signals with additive white Gaussian noise (AWGN). The influence of white noise on a typical emotion recognition system is studied. The emotion classifier is implemented with a Gaussian mixture model (GMM). A Chinese speech emotion database is used for training and testing, which includes nine emotion classes (happiness, sadness, anger, surprise, fear, anxiety, hesitation, confidence, and the neutral state). Two speech enhancement algorithms are introduced for improved emotion classification. In the experiments, the Gaussian mixture model is trained on clean speech data and tested under AWGN with various signal-to-noise ratios (SNRs). Both the emotion class model and the dimension space model are adopted for the evaluation of the emotion recognition system. In the emotion class model, the nine emotion classes are classified. In the dimension space model, the arousal and valence dimensions are classified into positive or negative regions. The experimental results show that the speech enhancement algorithms consistently improve the performance of our emotion recognition system under various SNRs, and that positive emotions are more likely to be misclassified as negative emotions in a white noise environment.
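The GMM classification principle can be sketched with the simplest case, a single diagonal Gaussian per emotion class scored by log-likelihood; the two-dimensional toy features and class names are invented for illustration and have nothing to do with the Chinese database used in the paper.

```python
import numpy as np

def fit_models(data_by_class):
    """One diagonal Gaussian per emotion class (a single-component GMM)."""
    return {c: (X.mean(axis=0), X.var(axis=0) + 1e-6)
            for c, X in data_by_class.items()}

def classify(x, models):
    """Pick the class whose Gaussian gives the highest log-likelihood."""
    def loglik(mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(models, key=lambda c: loglik(*models[c]))

# Toy 2-D acoustic features for two well-separated "emotions"
rng = np.random.default_rng(1)
data = {"anger": rng.normal([3, 3], 0.3, size=(50, 2)),
        "neutral": rng.normal([-3, -3], 0.3, size=(50, 2))}
models = fit_models(data)
```

Noise shifts test features away from the class means, which is why enhancement before scoring improves the recognition rates reported above.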
EN
Nonnegative matrix factorization (NMF) is one of the most popular machine learning tools for speech enhancement (SE). However, two problems reduce the performance of traditional NMF-based SE algorithms. One is related to the overlap-and-add operation used in short-time Fourier transform (STFT) based signal reconstruction, and the other is the Euclidean distance commonly used as an objective function; both can cause distortion in the SE process. To overcome these shortcomings, we propose a novel joint SE framework which combines the discrete wavelet packet transform (DWPT) and Itakura-Saito nonnegative matrix factorisation (ISNMF). In this approach, the speech signal is first split into a series of subband signals using the DWPT. Then, ISNMF is used to enhance the speech in each subband signal. Finally, the inverse DWPT is used to reconstruct the enhanced speech subband signals. The experimental results show that the proposed joint framework effectively improves speech enhancement performance and performs better in the unseen-noise case than traditional NMF methods.
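Itakura-Saito NMF has standard multiplicative updates (the beta-divergence with beta = 0); a minimal sketch follows. The rank, iteration count, and toy data are illustrative assumptions, and the subband DWPT pipeline around the factorisation is omitted.

```python
import numpy as np

def is_nmf(V, rank=3, n_iter=200, seed=0):
    """Itakura-Saito NMF of a nonnegative matrix V via the standard
    multiplicative updates for the beta-divergence with beta = 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 0.1
    H = rng.random((rank, V.shape[1])) + 0.1
    for _ in range(n_iter):
        WH = W @ H
        W *= ((V / WH ** 2) @ H.T) / ((1.0 / WH) @ H.T)
        WH = W @ H
        H *= (W.T @ (V / WH ** 2)) / (W.T @ (1.0 / WH))
    return W, H

# Fit an exactly low-rank nonnegative toy "spectrogram"
rng = np.random.default_rng(2)
V = (rng.random((16, 3)) @ rng.random((3, 24))) + 0.01
W, H = is_nmf(V)
err = np.mean(np.abs(W @ H - V) / V)
```

The multiplicative form keeps both factors nonnegative throughout, which is the property the enhancement stage relies on.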
EN
Speech enhancement is one of the many challenging tasks in signal processing, especially in the case of nonstationary speech-like noise. In this paper a new incoherent discriminative dictionary learning algorithm is proposed to model both speech and noise, where the cost function accounts for both “source confusion” and “source distortion” errors, with a regularization term that penalizes the coherence between the speech and noise sub-dictionaries. At the enhancement stage, we use sparse coding on the learnt dictionary to estimate both the clean speech and the noise amplitude spectrum. In the final phase, a Wiener filter is used to refine the clean speech estimate. Experiments on the Noizeus dataset, using two objective speech enhancement measures (frequency-weighted segmental SNR and Perceptual Evaluation of Speech Quality, PESQ), demonstrate that the proposed algorithm outperforms the other speech enhancement methods tested.
EN
This paper proposes a speech enhancement method using the multiple scales and thresholds of the auditory perception wavelet transform, which is suitable for low SNR (signal-to-noise ratio) environments. The method achieves noise reduction by threshold processing, based on the human ear's auditory masking effect, of the auditory perception wavelet transform coefficients of the speech signal. At the same time, to prevent high-frequency loss during noise suppression, we first make a voicing decision on the speech signals and then process the unvoiced and voiced segments with different thresholds and different judgments. Lastly, we perform objective and subjective tests on the enhanced speech. The results show that, compared to other spectral subtraction methods, our method keeps the unvoiced components intact while suppressing the residual and background noise. Thus, the enhanced speech has better clarity and intelligibility.
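Wavelet-domain thresholding with a voiced/unvoiced switch can be sketched with a single-level Haar transform and soft thresholding; the Haar basis and the particular threshold values are stand-ins, not the auditory perception wavelet or the paper's masking-derived thresholds.

```python
import numpy as np

def haar(x):
    """Single-level Haar transform: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def inv_haar(a, d):
    """Inverse of the single-level Haar transform."""
    out = np.empty(a.size * 2)
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

def soft(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise_frame(frame, voiced, t_voiced=0.05, t_unvoiced=0.01):
    """Threshold only the detail band; unvoiced frames get a gentler
    threshold so high-frequency consonant energy is preserved."""
    a, d = haar(frame)
    return inv_haar(a, soft(d, t_voiced if voiced else t_unvoiced))

x = np.random.default_rng(0).normal(size=64)  # toy frame
```

With zero thresholds the frame is perfectly reconstructed, confirming the transform pair is lossless.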
EN
Reverberation is a common problem for many speech technologies, such as automatic speech recognition (ASR) systems. This paper investigates the novel combination of precedence, binaural and statistical independence cues for enhancing reverberant speech, prior to ASR, under these adverse acoustical conditions when two microphone signals are available. Results of the enhancement are evaluated in terms of relevant signal measures and accuracy for both English and Polish ASR tasks. These show inconsistencies between the signal and recognition measures, although in recognition the proposed method consistently outperforms all other combinations and the spectral-subtraction baseline.
EN
This paper presents an unconventional approach to perceptual sound processing, utilizing the Warped Discrete Fourier Transform. Unlike the ordinary Discrete Fourier Transform, this variant allows nonuniform sampling of the z-transform over the unit circle. Moreover, the warping can be adjusted to approximate the nonlinear frequency resolution of the human ear. Thus some aspects of psychoacoustic analysis and processing can be improved, which was verified in three practical applications. First, an advanced speech enhancement system operating in the perceptually warped spectrum domain was configured. More recently, the same idea was employed in speech and audio compression.
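Nonuniform sampling of the z-transform on the unit circle can be shown directly: evaluate the transform at arbitrary normalized frequencies instead of the uniform DFT grid. The quadratic "Bark-like" frequency spacing below is only an illustrative warping, not the allpass warping of the actual Warped DFT.

```python
import numpy as np

def warped_dft(x, freqs):
    """Evaluate the z-transform of x on the unit circle at arbitrary
    normalized frequencies `freqs` (cycles/sample) — nonuniform sampling."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-2j * np.pi * f * n)) for f in freqs])

# Denser sampling at low frequencies, mimicking the ear's resolution
warped_freqs = 0.5 * np.linspace(0.0, 1.0, 32) ** 2
x = np.random.default_rng(3).normal(size=8)
```

On the uniform grid k/N this reduces exactly to the ordinary DFT, which is a convenient sanity check.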
EN
The most challenging aspects of speech enhancement are tracking non-stationary noises over long speech segments and coping with low Signal-to-Noise Ratios (SNRs). Various speech enhancement techniques have been proposed, but they were inaccurate in tracking highly non-stationary noises. The Empirical Mode Decomposition and Hurst-based (EMDH) approach was therefore proposed to enhance signals corrupted by non-stationary acoustic noises. Hurst exponent statistics were adopted for identifying and selecting the set of Intrinsic Mode Functions (IMFs) most affected by the noise components, and the speech signal was reconstructed from the least corrupted IMFs. Though this increases the SNR, its time and resource consumption are high, and it requires significant improvement in non-stationary noise scenarios. Hence, in this article, the EMDH approach is enhanced with a Sliding Window (SW) technique. In this SWEMDH approach, the EMD is computed over a small window sliding along the time axis, with the window size depending on the signal frequency band. Possible discontinuities in IMFs between windows are prevented by setting the total number of modes and the number of sifting iterations a priori. For each mode, the number of sifting iterations is determined by decomposing many signal windows with the standard algorithm and computing the average number of sifting steps per mode. With this approach, the time complexity is reduced significantly while maintaining suitable decomposition quality. Finally, experimental results show considerable improvements in speech enhancement under non-stationary noise environments.
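The sliding-window driver of such an approach can be sketched independently of the EMD itself: process consecutive windows along the time axis with a per-window decomposition `fn` and concatenate the results. Here `fn` is a hypothetical placeholder for the per-window EMD with its preset mode count and sifting iterations.

```python
import numpy as np

def windowed_apply(x, win, fn):
    """Run a per-window processing function `fn` over consecutive windows
    along the time axis and concatenate the processed windows."""
    out = [fn(x[i:i + win]) for i in range(0, len(x) - win + 1, win)]
    return np.concatenate(out)

x = np.arange(128.0)  # toy signal whose length is a multiple of the window
```

With an identity `fn` the signal passes through unchanged, so any distortion comes only from the per-window decomposition, not the windowing driver.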
EN
A novel speech enhancement method based on the generalized sidelobe canceller (GSC) structure is presented. We show that it is possible to reduce audible speech distortion and preserve the residual noise level under acoustic model uncertainties. This is done by constraining the speech leakage power according to masking phenomena and conditionally minimizing the residual noise power. We implemented the proposed approach using a simple delay-and-sum beamformer model. Finally, a comparative evaluation of the selected methods is performed using objective speech quality measures. The results show that the novel method outperforms the conventional one, providing lower speech distortion.
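The delay-and-sum beamformer used as the implementation model above can be sketched in a few lines for integer sample delays; fractional delays and the masking-based constraints are beyond this illustration.

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Compensate each channel's integer sample delay toward the target
    direction, then average the aligned channels."""
    aligned = [np.roll(x, -d) for x, d in zip(channels, delays)]
    return np.mean(aligned, axis=0)

x = np.random.default_rng(4).normal(size=100)  # toy target signal
```

If the second microphone receives the same signal delayed by three samples, compensating that delay and averaging recovers the signal exactly (for this noiseless circular toy case).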
EN
Speech enhancement is fundamental for various real-time speech applications, and it is a challenging task in the single-channel case because in practice only one data channel is available. In this paper we propose a supervised single-channel speech enhancement algorithm based on a deep neural network (DNN) with less aggressive Wiener filtering as an additional DNN layer. During the training stage the network learns to predict the magnitude spectra of the clean and noise signals from the acoustic features of the input noisy speech. Relative spectral transform-perceptual linear prediction (RASTA-PLP) is used to extract the acoustic features at the frame level, and an autoregressive moving average (ARMA) filter is applied to smooth the temporal curves of the extracted features. The trained network predicts the coefficients to construct a ratio mask based on a mean square error (MSE) objective cost function. The less aggressive Wiener filter is placed as an additional layer on top of the DNN to produce an enhanced magnitude spectrum. Finally, the noisy speech phase is used to reconstruct the enhanced speech. The experimental results demonstrate that the proposed DNN framework with less aggressive Wiener filtering outperforms competing speech enhancement methods in terms of speech quality and intelligibility.
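The ratio-mask construction and the "less aggressive" softening can be sketched as below; exponent-based softening is one plausible reading of a milder Wiener gain, used here as an assumption rather than the authors' exact layer.

```python
import numpy as np

def ratio_mask(speech_mag, noise_mag):
    """Ideal-ratio-style mask from predicted speech and noise magnitudes."""
    return speech_mag ** 2 / (speech_mag ** 2 + noise_mag ** 2 + 1e-12)

def soften(mask, beta=0.5):
    """'Less aggressive' filtering: raise the mask toward 1 (beta < 1)
    so attenuation is milder than the plain Wiener gain."""
    return mask ** beta

m = ratio_mask(np.array([1.0, 2.0]), np.array([1.0, 1.0]))
```

Since the mask lies in [0, 1], the softened mask is never smaller than the original, i.e. it always attenuates less.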