Results found: 6

Search results
Searched in keywords: sparsity
EN
In various biomedical applications designed to compare two groups (e.g. patients and controls in matched case-control studies), it is often desirable to perform dimensionality reduction in order to learn a classification rule over high-dimensional data. This paper considers a centroid-based classification method for paired data which, at the same time, performs supervised variable selection respecting the matched-pairs design. We propose an algorithm for optimizing the centroid (prototype, template). A subsequent optimization of weights for the centroid ensures sparsity, robustness to outliers, and a clear interpretation of the contribution of individual variables to the classification task. We apply the method to a simulated matched case-control dataset, to a gene expression study of acute myocardial infarction, and to mouth localization in 2D facial images. The novel approach yields performance comparable to standard classifiers and outperforms them if the data are contaminated by outliers; this robustness makes the method relevant for genomic, metabolomic or proteomic high-dimensional data (in matched case-control studies) and for medical diagnostics based on images, as (excessive) noise and contamination are ubiquitous in biomedical measurements.
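The abstract does not spell out how the centroid and its weights are optimized, so the following is only a minimal, generic sketch of a sparse weighted-centroid rule on within-pair differences; the robust prototype (coordinate-wise median), the weighting rule (centroid magnitude scaled by a robust spread) and the hard sparsity threshold are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def fit_weighted_centroid(case, control, sparsity_threshold=0.5):
    """Generic sketch: learn a sparse weighted centroid from paired data.

    case, control: arrays of shape (n_pairs, n_features); row i of each
    belongs to the same matched pair. The weighting rule below is an
    illustrative assumption, not the method from the paper.
    """
    diff = case - control                      # respect the matched-pairs design
    centroid = np.median(diff, axis=0)         # robust prototype of the case-minus-control direction
    scale = np.median(np.abs(diff - centroid), axis=0) + 1e-12
    weights = np.abs(centroid) / scale         # large, stable coordinates get large weights
    weights[weights < sparsity_threshold] = 0  # sparsity: drop uninformative variables
    return centroid, weights

def classify_pair(x_a, x_b, centroid, weights):
    """Label as 'case' the member of the pair whose difference aligns
    better with the weighted centroid."""
    score = np.dot(weights * centroid, x_a - x_b)
    return "a_is_case" if score >= 0 else "b_is_case"
```

In this sketch a new pair is classified purely by the sign of the weighted inner product between its difference vector and the learned centroid, so the nonzero weights directly identify which variables drive the decision.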
Optimization of ℓp-regularized Linear Models via Coordinate Descent
EN
In this paper we demonstrate how an ℓp-regularized univariate quadratic loss function can be optimized effectively (for 0 ≤ p ≤ 1) without approximating the penalty term, and we provide an analytical solution for p = 1/2. Next, we adapt this approach to important multivariate cases such as linear and logistic regression, using the coordinate descent algorithm. Finally, we compare the sample complexity of ℓ1-regularized models with that of ℓp-regularized models, 0 ≤ p < 1, on artificial and real datasets.
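For the familiar p = 1 case, the univariate quadratic-plus-penalty subproblem has the well-known soft-thresholding solution, which coordinate descent applies feature by feature. The sketch below shows only this p = 1 (lasso) instance with an assumed objective scaling of 1/(2n); the paper's exact treatment of general ℓp penalties and the analytical p = 1/2 update are not reproduced here.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Closed-form minimizer of 0.5*(b - z)**2 + gamma*|b| (the p = 1 case)."""
    return np.sign(z) * max(abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iters=200):
    """Minimal sketch of coordinate descent for the l1-regularized linear model
    (1/(2n)) * ||y - X b||^2 + lam * ||b||_1.
    Assumes the columns of X are standardized (nonzero, ideally unit variance)."""
    n, d = X.shape
    b = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iters):
        for j in range(d):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual excluding feature j
            z_j = X[:, j] @ r_j / n            # univariate least-squares target
            # each coordinate update solves a univariate quadratic-plus-penalty problem
            b[j] = soft_threshold(z_j, lam) / col_sq[j]
    return b
```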
EN
In magnetic resonance imaging (MRI), k-space sampling is, due to physical restrictions, very time-consuming, and it cannot be much improved within classical Nyquist-based sampling theory. Recent developments exploit the fact that MR images are sparse in some representations (e.g. wavelet coefficients). This theory, created by Candès and Romberg and called compressed sensing (CS), shows that images with sparse representations can be recovered from randomly undersampled k-space data by using nonlinear reconstruction algorithms (e.g. ℓ1-norm minimization). In this paper, the mathematical preliminaries of CS are outlined in the form introduced by Candès. We describe the main conditions on measurement matrices and recovery algorithms and present a basic example, showing that while the method really works (reducing the time of an MR examination), there are some major problems that need to be taken into consideration.
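In the usual CS-MRI formulation (notation assumed here, not taken from the paper), the reconstruction seeks the image whose sparsifying-transform coefficients have minimal ℓ1 norm among all images consistent with the measured k-space samples:

\[
\hat{x} \;=\; \arg\min_{x}\ \|\Psi x\|_{1}
\quad \text{subject to} \quad \|\mathcal{F}_{u}\,x - y\|_{2} \le \varepsilon,
\]

where x is the image, Ψ a sparsifying transform (e.g. a wavelet transform), F_u the undersampled Fourier (k-space) sampling operator, y the acquired samples, and ε a bound on the measurement noise.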
EN
We analyze representative ill-posed scenarios of tomographic PIV (particle image velocimetry), with a focus on conditions for unique volume reconstruction. Based on sparse random seedings of a region of interest with small particles, the corresponding systems of linear projection equations are analyzed probabilistically in order to determine: (i) the feasibility of unique reconstruction in terms of the imaging geometry and the critical sparsity parameter, and (ii) the sharpness of the transition to non-unique reconstruction with ghost particles when the sparsity parameter is chosen improperly. The sparsity parameter directly relates to the seeding density used for PIV in experimental fluid dynamics, which to date has been chosen empirically. Our results provide a basic mathematical characterization of the PIV volume reconstruction problem, an essential prerequisite for any algorithm used to actually compute the reconstruction. Moreover, we connect the problem of reconstructing a sparse volume function from few tomographic projections to major developments in compressed sensing.
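Concretely (with notation assumed for this note), discretizing the measurement volume into n voxels and stacking the m pixel intensities of all cameras gives an underdetermined linear system, and uniqueness of the reconstruction hinges on how sparse the seeded volume is relative to that system:

\[
A x = b, \qquad A \in \mathbb{R}^{m \times n},\ \ m \ll n,
\]
\[
\min_{x \ge 0}\ \|x\|_{1} \quad \text{subject to} \quad A x = b,
\]

where x ≥ 0 is the voxel intensity volume, A the camera projection matrix and b the recorded images; the ℓ1 problem is the standard convex surrogate for seeking the sparsest consistent volume, which is where the connection to compressed sensing enters.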
EN
To solve the underdetermined blind source separation (UBSS) problem, Aissa-El-Bey et al. have proposed subspace-based algorithms in the time-frequency (TF) domain, where a fixed (maximum) value of K, i.e., the number of active sources overlapping at any TF point, is assumed for simplicity. In this paper, based on principal component analysis (PCA), we propose a modified algorithm that estimates K for the selected frequency bins in which most of the energy is concentrated. Improved performance is obtained without increasing complexity.
PL
To solve the underdetermined blind source separation (UBSS) problem, Aissa-El-Bey et al. proposed a time-frequency algorithm in which the number of active sources overlapping at each TF point is fixed. This paper proposes a modified algorithm based on principal component analysis (PCA). Improved performance is obtained without increasing the complexity of the method.
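The estimation of K described in the abstract above can be pictured as counting the significant eigenvalues of a local covariance matrix of mixture TF vectors; the sketch below uses a cumulative-energy stopping rule, which is an assumption made for illustration rather than the criterion from the paper.

```python
import numpy as np

def estimate_active_sources(tf_vectors, energy_fraction=0.95):
    """Illustrative PCA-style estimate of K, the number of sources active
    around a time-frequency point.

    tf_vectors: complex array of shape (n_snapshots, n_mixtures) holding STFT
    mixture vectors from a small TF neighbourhood. The stopping rule (smallest
    number of principal components capturing `energy_fraction` of the energy)
    is an assumption for this sketch, not the rule from the paper.
    """
    # sample covariance of the mixture vectors (Hermitian by construction)
    R = tf_vectors.conj().T @ tf_vectors / tf_vectors.shape[0]
    eigvals = np.linalg.eigvalsh(R)[::-1]           # real eigenvalues, descending
    cumulative = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cumulative, energy_fraction) + 1)
```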
EN
In this article we study the regularization of optimization problems by Tikhonov regularization. The optimization problems are subject to pointwise inequality constraints in L²(Ω). We derive a priori regularization error estimates as both the regularization parameter and the noise level tend to zero. We rely on an assumption that combines a source condition with a structural assumption on the active sets. Moreover, we introduce a strategy for choosing the regularization parameter as a function of the noise level, and we prove that this parameter choice rule converges with optimal order.
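The abstract does not state the problem class explicitly, so as a generic prototype (with f, α, u_a, u_b being assumed notation), a Tikhonov-regularized problem with pointwise inequality constraints in L²(Ω) can be written as

\[
\min_{u \in U_{\mathrm{ad}}}\ f(u) + \frac{\alpha}{2}\,\|u\|_{L^{2}(\Omega)}^{2},
\qquad
U_{\mathrm{ad}} := \bigl\{\, u \in L^{2}(\Omega) \;:\; u_{a}(x) \le u(x) \le u_{b}(x) \ \text{a.e. in } \Omega \,\bigr\},
\]

where α > 0 is the regularization parameter; the error estimates concern the limit in which α and the noise level tend to zero together.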