Authors
F. Camastra, A. Casolaro and G. Iannuzzo
Journal
Neural Computing and Applications (Springer)
Abstract
The paper presents a novel manifold learning algorithm, the Deep Gaussian Process Autoencoder (DGPA), based on deep Gaussian processes. The DGPA algorithm has two main characteristics: a bottleneck structure, borrowed from variational autoencoders, and an architecture based on the so-called doubly stochastic variational inference (DSVI) for deep Gaussian processes. The main novelties of the paper are the DGPA algorithm itself and the experimental protocol for evaluating it; in fact, to the best of our knowledge, deep Gaussian process algorithms have not yet been applied to manifold learning. Besides, an experimental protocol, the so-called Manifold Learning Performance Protocol (MLPP), is introduced to quantitatively compare the geometry-preserving properties of the manifold learning projections of the proposed Deep Gaussian Process Autoencoder with those of state-of-the-art manifold learning algorithms. Extensive experimental tests on eleven synthetic and five real datasets show that the Deep Gaussian Process Autoencoder compares favorably with the other manifold learning competitors.
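As a loose illustration of what "doubly stochastic" means here, the sketch below propagates Monte Carlo samples through a stack of layers, drawing a fresh Gaussian sample at every layer rather than forwarding only the mean. This is a deliberate simplification, not the paper's DSVI: each layer's inducing-point GP posterior is replaced by a hypothetical linear mean plus fixed Gaussian noise, and all names (`sample_layer`, `dsvi_forward`) are illustrative.

```python
import random

random.seed(0)

def sample_layer(h, weights, noise_std):
    # Stand-in for one deep-GP layer: a hypothetical linear posterior
    # mean plus Gaussian noise replaces the inducing-point GP posterior.
    mean = sum(w * x for w, x in zip(weights, h))
    return [mean + random.gauss(0.0, noise_std)]

def dsvi_forward(x, layers, n_samples=500):
    # First source of stochasticity: samples are drawn through every
    # layer of the stack. In training, subsampling minibatches of data
    # supplies the second source, hence "doubly" stochastic.
    total = 0.0
    for _ in range(n_samples):
        h = x
        for weights, noise_std in layers:
            h = sample_layer(h, weights, noise_std)
        total += h[0]
    return total / n_samples

# Two toy "layers": 2-D input -> 1-D hidden -> 1-D output.
layers = [([0.5, 0.5], 0.1), ([2.0], 0.1)]
estimate = dsvi_forward([1.0, 1.0], layers)
```

Because the toy layers are linear, the Monte Carlo average converges to the composition of the layer means (here 2.0), which is exactly the property that lets DSVI form unbiased estimates of the variational objective from samples.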
Description
The article “Manifold learning by a deep Gaussian process autoencoder” introduces a new manifold learning algorithm, the Deep Gaussian Process Autoencoder (DGPA), which leverages deep Gaussian processes within a bottleneck autoencoder architecture.
The method combines a variational autoencoder-style latent bottleneck with doubly stochastic variational inference to obtain flexible, probabilistic low-dimensional representations that preserve the geometry of high-dimensional data. A dedicated evaluation framework, called the Manifold Learning Performance Protocol (MLPP), is also proposed to quantitatively compare the geometric preservation achieved by DGPA against established manifold learning techniques.
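To make the bottleneck idea concrete, the sketch below wires two exact GP regression layers (posterior mean only) into an encoder/decoder pair on a toy 1-D curve embedded in 3-D. This is a simplified stand-in, not the paper's method: DGPA learns the latent representation with doubly stochastic variational inference, whereas here the 1-D latent coordinates are fixed and each GP layer is fit in closed form; the names (`GPLayer`, `rbf`, `solve`) are illustrative.

```python
import math

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel between two vectors.
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (2 * ls * ls))

def solve(A, B):
    # Gauss-Jordan elimination with partial pivoting; solves A X = B.
    n = len(A)
    M = [ra[:] + rb[:] for ra, rb in zip(A, B)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [[M[r][j] / M[r][r] for j in range(n, len(M[0]))] for r in range(n)]

class GPLayer:
    """Exact GP regression layer (posterior mean only), standing in for
    one variationally trained layer of the deep GP stack."""
    def __init__(self, X, Y, noise=1e-3):
        self.X = X
        K = [[rbf(a, b) + (noise if i == j else 0.0)
              for j, b in enumerate(X)] for i, a in enumerate(X)]
        self.alpha = solve(K, Y)  # n x out_dim regression weights

    def __call__(self, Z):
        return [[sum(rbf(z, x) * self.alpha[i][d] for i, x in enumerate(self.X))
                 for d in range(len(self.alpha[0]))] for z in Z]

# Toy data: points on a 1-D curve embedded in 3-D.
ts = [i / 9 for i in range(10)]
X = [[t, math.sin(2 * t), math.cos(2 * t)] for t in ts]
Z = [[t] for t in ts]          # fixed 1-D bottleneck coordinates

encoder = GPLayer(X, Z)        # 3-D -> 1-D projection
decoder = GPLayer(Z, X)        # 1-D -> 3-D reconstruction
X_rec = decoder(encoder(X))

err = max(abs(a - b) for p, q in zip(X, X_rec) for a, b in zip(p, q))
```

Squeezing the data through the 1-D latent layer and reconstructing it with a second GP is the structural analogue of DGPA's bottleneck: the reconstruction error at the training points stays small precisely because the curve really is one-dimensional.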