Stochastic Encoders

The document discusses the structure of the variational auto-encoder (VAE), highlighting the roles of the stochastic encoder and decoder in mapping input data to a latent representation. It emphasizes the VAE's capability to learn low-dimensional embeddings of function classes by combining a trainable feature mapping with a generative model. Overall, the VAE is presented as a continuous stochastic latent-variable model.

STOCHASTIC ENCODERS

Above is the structure of the variational auto-encoder (VAE). The stochastic encoder q_φ(z|x_i), also called the inference model, learns stochastic mappings from an observed X-space (the input data) to a latent Z-space (the hidden representation). The generative model p_θ(x_i|z), a stochastic decoder, reconstructs the data given the hidden representation.
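The encoder/decoder structure above can be sketched in a few lines. This is a minimal illustration, not the document's implementation: it assumes toy linear maps and hypothetical dimensions (x_dim, z_dim), and shows how the encoder q_φ(z|x) outputs the parameters of a Gaussian over z, how z is drawn via the reparameterization trick, and how the decoder p_θ(x|z) maps z back to data space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
x_dim, z_dim = 4, 2

# Toy linear "encoder" weights: map x to the mean and log-variance of q_phi(z|x).
W_mu = rng.normal(size=(z_dim, x_dim))
W_logvar = rng.normal(size=(z_dim, x_dim))

def encode(x):
    """Stochastic encoder q_phi(z|x): returns the parameters of a Gaussian over z."""
    return W_mu @ x, W_logvar @ x

def sample_z(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Toy linear "decoder" p_theta(x|z): reconstructs x from the latent code z.
W_dec = rng.normal(size=(x_dim, z_dim))

def decode(z):
    return W_dec @ z

# One forward pass: encode, sample a latent code, reconstruct.
x = rng.normal(size=x_dim)
mu, logvar = encode(x)
z = sample_z(mu, logvar)
x_hat = decode(z)
```

Because the noise enters only through eps, the sampling step stays differentiable with respect to the encoder parameters, which is what makes the stochastic encoder trainable by gradient descent.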
The VAE defines a continuous stochastic latent-variable model. We use a VAE to learn low-dimensional embeddings of function classes by combining a trainable feature mapping with a generative model.
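Training the encoder and decoder jointly relies on the evidence lower bound (ELBO): a reconstruction term plus a KL regularizer that pulls q_φ(z|x) toward the prior N(0, I). The sketch below is an assumption-laden illustration (squared-error reconstruction, diagonal-Gaussian posterior), not the document's training code; the closed-form Gaussian KL term is standard.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ) for a diagonal Gaussian."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def neg_elbo(x, x_hat, mu, logvar):
    """Negative ELBO: squared-error reconstruction plus the KL regularizer."""
    recon = np.sum((x - x_hat) ** 2)
    return recon + gaussian_kl(mu, logvar)

# Example: when q_phi matches the prior exactly (mu = 0, logvar = 0),
# the KL term vanishes and only the reconstruction error remains.
loss = neg_elbo(np.ones(4), np.ones(4), np.zeros(2), np.zeros(2))
```

Minimizing this loss trades off reconstruction fidelity against keeping the latent code close to the prior, which is what yields a smooth low-dimensional embedding space.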
