
Questions tagged [autoencoders]

Feedforward neural networks trained to reconstruct their own input. Usually one of the hidden layers is a "bottleneck", leading to an encoder→decoder interpretation.
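As a rough illustration of the bottleneck idea in the tag description, here is a minimal sketch in NumPy; the dimensions (`input_dim`, `bottleneck_dim`) and the random, untrained weight matrices are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64-dim inputs squeezed through an 8-dim bottleneck.
input_dim, bottleneck_dim = 64, 8

# Random linear layers stand in for trained encoder/decoder weights.
W_enc = rng.standard_normal((input_dim, bottleneck_dim))
W_dec = rng.standard_normal((bottleneck_dim, input_dim))

x = rng.standard_normal((10, input_dim))  # batch of 10 inputs
z = np.tanh(x @ W_enc)                    # encoder: compress into the bottleneck
x_hat = z @ W_dec                         # decoder: reconstruct the input

print(z.shape)      # (10, 8)  -- the low-dimensional code
print(x_hat.shape)  # (10, 64) -- reconstruction has the input's shape
```

Training would then minimize a reconstruction loss such as `np.mean((x - x_hat) ** 2)` with respect to the weights.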

1 vote · 0 answers · 40 views

I was curious if anyone happens to know why data augmentations (like color jitter, random cropping, etc.) do not always appear to be used when training autoencoders or neural-based compressors for images ...
asked by thisIsAUsername
2 votes · 0 answers · 52 views

I am trying to use a convolutional autoencoder to perform dimensionality reduction with the ultimate goal of reconstructing temperature fields. As I understand it, the goal of an autoencoder is to ...
asked by James Bosha
0 votes · 0 answers · 39 views

I get that the decoder computes $p_\theta(x\mid z)$, where $\theta$ is close to the true MLE, which is intractable. But still, $p_\theta(\cdot\mid z)$ is a probability distribution, and the decoder ...
asked by Link
1 vote · 1 answer · 84 views

Are VAEs considered explainable AI? To me, they are, because the latent variables are interpretable: e.g., you change one and you might see its effect on the head rotation (for a dataset of faces, for ...
asked by Link
3 votes · 1 answer · 132 views

I have trouble understanding the minimization of the KL divergence. In this link, https://www.ibm.com/think/topics/variational-autoencoder, they say, "One obstacle to using KL divergence for ...
asked by Link
5 votes · 2 answers · 329 views

Setup: The variational autoencoder (VAE) loss is given by the following (see here, for example): $$L = - \sum_{j = 1}^J \frac{1}{2} \left(1 + \log (\sigma_j^2) - \sigma_j^2 - \mu_j^2 \right) - \frac{1}{...
asked by Physics Enthusiast
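The summation in the loss quoted above is the standard closed-form KL divergence between a diagonal-Gaussian posterior and a standard normal prior. A minimal sketch, with made-up values for `mu` and `log_var` standing in for one encoder output:

```python
import numpy as np

# Hypothetical latent statistics for one example, J = 4 latent dimensions.
mu = np.array([0.0, 0.5, -0.5, 1.0])
log_var = np.array([0.0, -0.2, 0.1, 0.3])

# Closed-form KL(q(z|x) || N(0, I)) summed over latent dimensions,
# matching the first term of the loss above (non-negative by construction).
kl = -0.5 * np.sum(1.0 + log_var - np.exp(log_var) - mu**2)
print(kl)
```

Note that each dimension contributes zero exactly when $\mu_j = 0$ and $\sigma_j^2 = 1$, i.e. when the posterior matches the prior.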
0 votes · 0 answers · 39 views

I am reading this paper https://openaccess.thecvf.com/content/CVPR2021/papers/Jaques_NewtonianVAE_Proportional_Control_and_Goal_Identification_From_Pixels_via_Physical_CVPR_2021_paper.pdf Are the ...
asked by fdl
3 votes · 1 answer · 102 views

I am quite new to ML and I am developing my first Variational AutoEncoder (VAE), which is composed of a CNN encoder (4 layers) and a CNN decoder (4 layers). The input images are of size 128x128 and ...
asked by lrod1994
0 votes · 0 answers · 44 views

I have developed a VAE to understand whether it is able to distinguish lung images of COVID-19, normal images, or images with viral pneumonia. The VAE is composed of a CNN encoder and a CNN decoder (shown ...
asked by lrod1994
1 vote · 1 answer · 112 views

I am looking into the relationship between the linear Variational Autoencoder (VAE) and probabilistic PCA (pPCA) presented by Lucas et al. (2019) in the "Don't Blame the ELBO!" paper. In the official ...
asked by user1571823
1 vote · 1 answer · 85 views

As mentioned in the title, I understand the mathematical derivation of equations (6-7) in Kingma's original paper. \begin{equation} \log p_\theta(\mathbf{x}, y) \geq \mathbb{E}_{q_\phi(\mathbf{z} \mid ...
asked by Wang Jing
0 votes · 0 answers · 94 views

Say you have 4 stacked output vectors of 4 different VAEs: $B \times 512 \times 4$. These $512$ elements correspond to $256\ \mu$ & $256\ \ln\sigma^2$ (log-variances) of four multivariate ...
asked by Robbe
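Assuming the layout described in the question above (first 256 channels are means $\mu$, last 256 are log-variances $\ln\sigma^2$; `B = 2` and the random data are made up), a common way to unpack such a stacked tensor is to slice along the channel axis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stacked outputs of 4 VAE heads: batch B=2, 512 channels, 4 heads.
B = 2
stacked = rng.standard_normal((B, 512, 4))

mu = stacked[:, :256, :]        # (B, 256, 4) means
log_var = stacked[:, 256:, :]   # (B, 256, 4) log-variances
sigma = np.exp(0.5 * log_var)   # standard deviations, always positive

print(mu.shape, sigma.shape)  # (2, 256, 4) (2, 256, 4)
```

Parameterizing $\ln\sigma^2$ rather than $\sigma$ keeps the network output unconstrained while guaranteeing positive variances after exponentiation.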
2 votes · 2 answers · 273 views

I’m learning different approaches to impute a tabular dataset of mixed continuous and categorical variables, and with data assumed to be missing completely at random. I converted the categorical data ...
asked by hiu
0 votes · 0 answers · 9 views

I'm trying to train an LSTM Variational Autoencoder, but I cannot figure out why the model is not making any progress; the loss gets stuck immediately. Here is my code and training loop. The sequences ...
asked by iTz_Lucky
0 votes · 0 answers · 33 views

Let's imagine I have time series data for 50 users and 20 features per user: User1_ts(F1, ...F20), User2_ts(F1, ...F20), ...User50_ts(F1, ... F20). F20 is my target variable for estimation, and each ...
asked by Carlo Allocca
