Gaussian Process Latent Variable Models for Dimensionality Reduction
A comparison of data sparsification methods applied to Gaussian Process Latent Variable Models for dimensionality reduction
We have implemented a Gaussian Process Latent Variable Model (GP-LVM), as described by Lawrence (Lawrence, 2005), and used it for dimensionality reduction on a standard dataset. To speed up training of the GP-LVM, we applied three different sparsification methods: the informative vector machine (IVM), k-means, and random sampling. For an easier comparison, we evaluated on the same dataset used in (Lawrence, 2005). The results showed that the GP-LVM performed similarly to PCA, and that adding sparsification worsened the quality of the dimensionality reduction. See the full report below to read more about the project. The source code can be found on GitHub here.
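As a rough illustration of the idea (not the project's actual implementation), the sketch below shows a minimal GP-LVM in Python: the latent coordinates are initialised with PCA, as in Lawrence (2005), and then optimised to maximise the Gaussian process marginal likelihood of the data under an RBF kernel. The `select_active_set` helper is a hypothetical stand-in for the sparsification step, selecting a subset of points either at random or via k-means; kernel hyperparameter optimisation, analytic gradients, and the IVM criterion are omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA


def rbf_kernel(X, lengthscale=1.0, variance=1.0, noise=1e-3):
    """RBF kernel over latent points X (N x q), with jitter/noise on the diagonal."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = variance * np.exp(-0.5 * d2 / lengthscale ** 2)
    return K + noise * np.eye(X.shape[0])


def neg_log_likelihood(x_flat, Y, q):
    """Negative GP marginal log-likelihood of Y as a function of the latent X."""
    N, D = Y.shape
    X = x_flat.reshape(N, q)
    K = rbf_kernel(X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))   # K^{-1} Y
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * D * logdet + 0.5 * np.sum(Y * alpha)


def fit_gplvm(Y, q=2, max_iter=200):
    """Fit a toy GP-LVM: PCA initialisation, then L-BFGS over the latent coordinates."""
    X0 = PCA(n_components=q).fit_transform(Y)
    res = minimize(neg_log_likelihood, X0.ravel(), args=(Y, q),
                   method="L-BFGS-B", options={"maxiter": max_iter})
    return res.x.reshape(Y.shape[0], q)


def select_active_set(Y, m, method="random", seed=0):
    """Hypothetical sparsification helper: pick m training points by random
    sampling or by taking the point closest to each k-means centroid."""
    rng = np.random.default_rng(seed)
    if method == "random":
        return rng.choice(len(Y), size=m, replace=False)
    if method == "kmeans":
        centroids = KMeans(n_clusters=m, n_init=10, random_state=seed).fit(Y).cluster_centers_
        d = np.linalg.norm(Y[:, None, :] - centroids[None, :, :], axis=2)
        return np.unique(np.argmin(d, axis=0))
    raise ValueError(f"unknown method: {method}")


if __name__ == "__main__":
    # Small synthetic example: reduce 10-D data to a 2-D latent space
    # after sparsifying to 50 active points.
    Y = np.random.default_rng(0).normal(size=(200, 10))
    idx = select_active_set(Y, m=50, method="kmeans")
    X_latent = fit_gplvm(Y[idx], q=2)
    print(X_latent.shape)  # (50, 2)
```

Because L-BFGS is run here without analytic gradients, this sketch only scales to small active sets, which is precisely why sparsification methods such as IVM, k-means, or random subsampling are used to limit the number of points entering the N x N kernel matrix.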
References
Lawrence, N. (2005). Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6, 1783-1816.