Exploring the Variational Autoencoder (VAE) family
In this repository, we provide minimal implementations of the VAE family: VAE, CVAE, and VQVAE. These implementations are applied to both the Anime-Face and Cartoon-Face datasets. Let's embark on this journey from zero to hero! 🌟
The Variational Autoencoder (VAE) is a generative model that learns a probabilistic mapping between the data space and a latent space: an encoder maps each input to a distribution over latent codes, a decoder reconstructs the input from a sampled code, and the whole model is trained by maximizing the evidence lower bound (ELBO). Below is an image generated using VAE:
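To make the training step concrete, here is a minimal PyTorch sketch of a VAE with the reparameterization trick and the ELBO loss. It assumes an MLP over flattened 64x64 RGB images; the layer sizes, latent dimension, and loss formulation are illustrative choices, not necessarily the ones used in this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal MLP VAE: the encoder outputs mu/log-variance, the decoder reconstructs the image."""
    def __init__(self, in_dim=64 * 64 * 3, hidden=512, z_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.fc_mu = nn.Linear(hidden, z_dim)
        self.fc_logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, so gradients flow through mu and sigma.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I)).
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Example usage with random data standing in for a batch of face images.
model = VAE()
x = torch.rand(16, 64 * 64 * 3)          # flattened images with pixel values in [0, 1]
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar)
loss.backward()
```

To generate new images after training, sample `z` from the standard normal prior and pass it through the decoder.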
The Conditional Variational Autoencoder (CVAE) extends the VAE by conditioning the generative process on additional information, such as a class label, so samples can be generated for a chosen class. Here are images generated for each label:
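As an illustration, the sketch below extends the VAE above by concatenating a one-hot class label to both the encoder input and the latent code. The number of classes and layer sizes are placeholders, not the repository's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    """Minimal conditional VAE: the label y is fed to both the encoder and the decoder."""
    def __init__(self, in_dim=64 * 64 * 3, num_classes=10, hidden=512, z_dim=64):
        super().__init__()
        self.num_classes = num_classes
        self.enc = nn.Sequential(nn.Linear(in_dim + num_classes, hidden), nn.ReLU())
        self.fc_mu = nn.Linear(hidden, z_dim)
        self.fc_logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + num_classes, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x, y):
        y_onehot = F.one_hot(y, self.num_classes).float()
        h = self.enc(torch.cat([x, y_onehot], dim=1))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(torch.cat([z, y_onehot], dim=1)), mu, logvar

# Sampling images for a chosen label: draw z from the prior and decode it together with that label.
model = CVAE()
y = torch.full((8,), 3, dtype=torch.long)      # generate 8 samples of class 3
z = torch.randn(8, 64)
y_onehot = F.one_hot(y, model.num_classes).float()
samples = model.dec(torch.cat([z, y_onehot], dim=1))
```

The training loss is the same ELBO as for the plain VAE; only the inputs to the encoder and decoder change.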
The Vector Quantized Variational Autoencoder (VQVAE) replaces the continuous latent space with a learned codebook of discrete embeddings: each encoder output is quantized to its nearest codebook vector, and the decoder reconstructs the image from the quantized codes. Below is an image reconstructed using VQVAE:
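The sketch below focuses on the vector-quantization step that distinguishes VQVAE: nearest-neighbour lookup into a learned codebook, a straight-through estimator that copies gradients past the non-differentiable lookup, and a codebook/commitment loss that keeps encoder outputs and code vectors close. The codebook size, embedding dimension, and commitment weight are illustrative, and the encoder/decoder are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbour lookup into a learned codebook with a straight-through gradient."""
    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z_e):
        # z_e: (batch, code_dim) continuous encoder outputs.
        # Compute distances to every codebook entry and pick the closest one.
        dist = torch.cdist(z_e, self.codebook.weight)          # (batch, num_codes)
        indices = dist.argmin(dim=1)
        z_q = self.codebook(indices)
        # Codebook loss pulls code vectors toward encoder outputs; commitment loss does the reverse.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        # Straight-through estimator: forward pass uses z_q, backward pass copies gradients to z_e.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, loss, indices

# Example: quantize a batch of encoder outputs.
vq = VectorQuantizer()
z_e = torch.randn(16, 64, requires_grad=True)
z_q, vq_loss, indices = vq(z_e)
```

The total training loss adds the reconstruction error of the decoder to `vq_loss`; in a full model the quantizer is applied to each spatial position of the encoder's feature map rather than to a single vector per image.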
Feel free to reach out if you have any questions or suggestions:
- Email: [email protected]
- GitHub: shining0611armor
Happy Learning! 😊