Search results (2 found)
EN
This paper investigates the relationship between various types of spectral clustering methods and their kinship to relaxed versions of graph cut methods. This predominantly analytical study exploits the closed (or nearly closed) form of the eigenvalues and eigenvectors of the unnormalized (combinatorial), normalized, and random-walk Laplacians of multidimensional weighted and unweighted grids. We demonstrate that spectral methods can be compared to (normalized) graph cut clustering only if the cut is performed to minimize the sum of the square roots of the weights (rather than the sum of the weights) of the removed edges. We also demonstrate that the spectrum of a regular grid graph can be derived by composing the spectra of the path graphs into which such a graph decomposes, but only for the combinatorial Laplacian; this is impossible for both the normalized and the random-walk Laplacian. We investigate the in-the-limit behavior of the combinatorial and normalized Laplacians, demonstrating that the eigenvalues of the two Laplacians converge to one another as the number of nodes grows while their eigenvectors do not. Lastly, we show that the distribution of eigenvalues is not uniform in the limit, violating a fundamental assumption of the compact spectral clustering method.
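The composition claim rests on a standard fact: a regular grid is the Cartesian product of path graphs, and the combinatorial Laplacian of a Cartesian product is the Kronecker sum of the factors' Laplacians, so the grid's eigenvalues are exactly the pairwise sums of the path eigenvalues. A minimal numerical sketch (not code from the paper; grid sizes and helper names are illustrative) checking this, and showing that the same composition fails for the normalized Laplacian:

    # Sketch: spectrum of a 2D grid from spectra of its path-graph factors.
    import numpy as np

    def path_adjacency(n):
        # Adjacency matrix of the path graph P_n.
        A = np.zeros((n, n))
        idx = np.arange(n - 1)
        A[idx, idx + 1] = A[idx + 1, idx] = 1.0
        return A

    def combinatorial_laplacian(A):
        return np.diag(A.sum(axis=1)) - A

    def normalized_laplacian(A):
        d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
        return np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt

    m, n = 4, 5
    Am, An = path_adjacency(m), path_adjacency(n)
    # Adjacency of the Cartesian product P_m x P_n (the m-by-n grid):
    A_grid = np.kron(Am, np.eye(n)) + np.kron(np.eye(m), An)

    # Combinatorial case: grid eigenvalues = all sums lambda_i + mu_j.
    ev_grid = np.linalg.eigvalsh(combinatorial_laplacian(A_grid))
    ev_m = np.linalg.eigvalsh(combinatorial_laplacian(Am))
    ev_n = np.linalg.eigvalsh(combinatorial_laplacian(An))
    ev_sum = np.sort(np.add.outer(ev_m, ev_n).ravel())
    print(np.allclose(np.sort(ev_grid), ev_sum))   # True

    # Normalized case: the analogous composition does not hold.
    nv_grid = np.linalg.eigvalsh(normalized_laplacian(A_grid))
    nv_m = np.linalg.eigvalsh(normalized_laplacian(Am))
    nv_n = np.linalg.eigvalsh(normalized_laplacian(An))
    nv_sum = np.sort(np.add.outer(nv_m, nv_n).ravel())
    print(np.allclose(np.sort(nv_grid), nv_sum))   # False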
EN
This paper poses the question of whether the use of the kernel trick is justified. We investigate it for the special case of its use in the kernel k-means algorithm. Kernel k-means is a clustering algorithm that allows data to be clustered in a similar way to k-means when an embedding of the data points into Euclidean space is not provided and instead a matrix of "distances" (dissimilarities) or similarities is available. The kernel trick allows us to bypass the need to find an embedding into Euclidean space. We show that the algorithm returns wrong results if the embedding does not actually exist. This means that the embedding must be found prior to using the algorithm. If it is found, the kernel trick is pointless; if it is not found, the distance matrix needs to be repaired. But the repair methods require the construction of an embedding, which, first, makes the kernel trick pointless because it is no longer needed, and second, means that kernel k-means may return different clusterings before and after the repair, so that the value of the clustering is called into question. In this paper, we identify a distance repair method that produces the same clustering before and after its application and that does not need to be performed explicitly, so the embedding never has to be constructed. This renders the kernel trick applicable for kernel k-means.
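To make the failure mode concrete: with the kernel trick, kernel k-means evaluates squared point-to-centroid distances directly from the kernel matrix K, and when K is indefinite (so no Euclidean embedding exists) these "squared distances" can come out negative. Below is a minimal sketch with a hypothetical helper name; the diagonal-shift repair shown at the end is one known clustering-preserving repair, included as an illustrative assumption, since the abstract does not name the method the paper identifies:

    # Sketch: kernel-trick distances in kernel k-means, and what goes
    # wrong when the "kernel" matrix is indefinite.
    import numpy as np

    def sq_dist_to_centroid(K, i, cluster):
        # Squared feature-space distance from point i to the centroid of
        # `cluster`, computed from the kernel matrix alone (kernel trick):
        # d^2 = K_ii - (2/|C|) sum_j K_ij + (1/|C|^2) sum_jl K_jl
        c = np.asarray(cluster)
        return K[i, i] - 2.0 * K[i, c].mean() + K[np.ix_(c, c)].mean()

    # An indefinite symmetric "similarity" matrix: it admits no Euclidean
    # embedding, and the trick yields a negative squared distance, which
    # no genuine geometry can realize.
    K_bad = np.array([[0.0, 1.0],
                      [1.0, 0.0]])
    print(np.linalg.eigvalsh(K_bad))               # [-1.  1.]: indefinite
    print(sq_dist_to_centroid(K_bad, 0, [0, 1]))   # -0.5 < 0

    # Illustrative repair (an assumption, not necessarily the paper's
    # method): shift the diagonal, K_s = K + s*I with s >= -lambda_min,
    # making K positive semidefinite. The shift adds the same constant
    # s*(1 - 1/|C|) to every point-to-own-centroid squared distance, so
    # the kernel k-means objective changes only by a partition-independent
    # constant and the optimal clustering is unchanged; no embedding is
    # ever constructed.
    s = -np.linalg.eigvalsh(K_bad).min()
    K_fixed = K_bad + s * np.eye(2)
    print(sq_dist_to_centroid(K_fixed, 0, [0, 1])) # 0.0 >= 0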