Doctoral thesis
OA Policy
English

Finding signals in the void: Improving deep latent variable generative models via supervisory signals present within data

Number of pages: 161
Imprimatur date: 2022-03-07
Defense date: 2021-09-15
Abstract

Supervisory signals are all around us, from distinguishing objects under differing lighting conditions, to predicting the future states of kinematic systems, to synchronizing video and audio modalities. This thesis focuses on improving deep latent variable generative models by leveraging supervisory signals already present within data. We explore three modalities with rich supervisory information: sequential learning, sequential factorization, and episodic learning.

We study the first modality through lifelong learning, where multiple tasks are observed consecutively and where knowledge gained from previous tasks is retained and used to aid future learning. This thesis develops a novel framework that recasts latent variable generative models in a lifelong learning setting, where distributions are observed sequentially and where knowledge of each distribution must be retained throughout the lifetime of the learner. Supervised knowledge is instilled into this framework through generative replay, coupled with a Bayesian posterior regularizer. These rich signals, gathered as a result of previous learning, help mitigate the catastrophic forgetting faced by lifelong learning models.

We then turn our attention to the problem of hybrid classification, where we develop a novel algorithm that decomposes the posterior of a proxy distribution into a set of factored distributions, such that samples from each posterior factor correspond to high-probability coordinates of a high-dimensional input distribution. The proposed method, "Variational Saccading", maximizes a novel variational lower bound on the conditional classification likelihood and enables neural classifiers to operate over previously intractable high-dimensional input spaces.

Finally, we take an empirical Bayes approach to episodic learning, where we leverage a set of i.i.d. inputs to learn a more informed prior in the form of an episodic memory. We extend the memory model of the Kanerva Machine to a novel differentiable block-allocated memory, "Kanerva++" (K++). K++ achieves state-of-the-art performance on numerous memory-conditional image generation tasks and demonstrates that relational information within an episode of samples can provide models with rich supervisory information in the form of a learnable prior.
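All three contributions build on deep latent variable generative models trained by maximizing the evidence lower bound (ELBO). As orientation only, here is a minimal generic VAE sketch in PyTorch, assuming a diagonal-Gaussian posterior and a Bernoulli likelihood; none of the thesis's specific architectures, bounds, or regularizers are reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal VAE: diagonal-Gaussian posterior q(z|x), Bernoulli likelihood p(x|z)."""
    def __init__(self, x_dim=784, z_dim=20, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def neg_elbo(logits, x, mu, logvar):
    # Negative ELBO = reconstruction NLL + KL(q(z|x) || N(0, I)), per sample.
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + kl) / x.size(0)

# Usage on a random batch of binarized inputs:
vae = TinyVAE()
x = torch.rand(8, 784)
logits, mu, logvar = vae(x)
loss = neg_elbo(logits, x, mu, logvar)
```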
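The lifelong-learning framework instills past knowledge through generative replay: a frozen snapshot of the previous learner synthesizes pseudo-data from earlier distributions, which is mixed into each update on the current task. The sketch below is schematic rather than the thesis's method: `frozen_prev.sample(n)` and `model.loss(batch)` are hypothetical helpers standing in for the generative model's sampler and negative-ELBO loss, and the Bayesian posterior regularizer is omitted.

```python
import copy
import torch

def lifelong_update(model, optimizer, current_batch, frozen_prev, replay_ratio=0.5):
    """One gradient step mixing current-task data with generative replay.

    Hypothetical helpers (not a real API): `frozen_prev.sample(n)` draws
    pseudo-data from the previous learner; `model.loss(x)` returns the
    negative ELBO. The thesis's Bayesian posterior regularizer is omitted.
    """
    n_replay = int(replay_ratio * current_batch.size(0))
    with torch.no_grad():
        replayed = frozen_prev.sample(n_replay)   # stand-ins for past-task data
    batch = torch.cat([current_batch, replayed], dim=0)
    loss = model.loss(batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Before starting task t, snapshot and freeze the learner from task t-1:
#   frozen_prev = copy.deepcopy(model).eval()
```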
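For the hybrid-classification contribution, the flavor of the objective can be seen from the generic conditional variational bound below, obtained via Jensen's inequality. The particular factorization over glimpse coordinates that defines Variational Saccading is developed in the thesis itself; the fully factored posterior shown here is only an illustrative assumption.

```latex
\log p(y \mid x)
  = \log \mathbb{E}_{q(z \mid x)}\!\left[\frac{p(y \mid z, x)\, p(z \mid x)}{q(z \mid x)}\right]
  \;\ge\; \mathbb{E}_{q(z \mid x)}\big[\log p(y \mid z, x)\big]
        - \mathrm{KL}\big(q(z \mid x) \,\|\, p(z \mid x)\big),
\qquad
q(z \mid x) = \prod_{t=1}^{T} q(z_t \mid x),
```

where, in this illustrative reading, each factor q(z_t | x) places mass on high-probability coordinates of the input, so the classifier only ever attends to small glimpses of an otherwise intractably large input space.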
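Finally, the episodic-learning contribution extends the Kanerva Machine's differentiable memory. The snippet below illustrates only the generic content-based (softmax-attention) read that such memories share; Kanerva++'s block-allocated addressing is a different, more structured mechanism and is not reproduced here.

```python
import torch
import torch.nn.functional as F

def soft_read(query, keys, values, temperature=1.0):
    """Differentiable content-based read: soft attention over memory slots.

    Generic addressing only; Kanerva++ uses learned block allocation instead.
    query: (B, d), keys: (S, d), values: (S, v) -> read-out of shape (B, v).
    """
    scores = query @ keys.t() / temperature   # (B, S) slot similarities
    weights = F.softmax(scores, dim=-1)       # soft slot addressing
    return weights @ values                   # weighted read-out

# Usage: read from an 8-slot memory with 16-d keys and 32-d values.
q = torch.randn(4, 16)
K, V = torch.randn(8, 16), torch.randn(8, 32)
out = soft_read(q, K, V)  # shape (4, 32)
```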

Keywords
  • Generative modeling
  • Variational inference
  • Differentiable memory
  • Latent variable
  • VAE
Citation (ISO format)
RAMAPURAM, Jason Emmanuel. Finding signals in the void: Improving deep latent variable generative models via supervisory signals present within data. Doctoral Thesis, 2022. doi: 10.13097/archive-ouverte/unige:160342

Technical information

Creation: 04/04/2022 16:50:00
First validation: 04/04/2022 16:50:00
Update time: 19/03/2024 16:10:26
Status update: 19/03/2024 16:10:26
Last indexation: 01/11/2024 02:28:42
All rights reserved by Archive ouverte UNIGE and the University of Geneva