Radically changed the question content to align better with suggestions and make it more well-defined
b0neval


In general, a latent space is a structure of lower dimensionality than the input space, in which points resemble one another more closely the nearer they lie to each other.

This article also refers to the layers of a convolutional neural network as a latent space (see the diagram). Some CNNs essentially squash an input image into a compressed representation as well, through an appropriate use of convolutional and pooling layers.
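
To make that "squashing" concrete, here is a minimal sketch (assuming PyTorch; the layer sizes are arbitrary and only for illustration, not taken from the article) of how convolution and pooling progressively reduce a 3×32×32 image to a 32-dimensional feature vector:

```python
import torch
import torch.nn as nn

# Toy CNN encoder: each stage shrinks the representation further.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3x32x32  -> 16x32x32
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 16x32x32 -> 16x16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1), # 16x16x16 -> 32x16x16
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x16x16 -> 32x8x8
    nn.AdaptiveAvgPool2d(1),                     # 32x8x8   -> 32x1x1
    nn.Flatten(),                                # -> 32-dimensional feature vector
)

x = torch.randn(1, 3, 32, 32)      # one random "image": 3 * 32 * 32 = 3072 values
z = encoder(x)                     # compressed feature vector
print(x.numel(), "->", z.numel())  # 3072 -> 32
```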

What I want to understand is: can we really look at the CNN's layers as the same kind of latent-space representation described in the definition above, i.e. are the feature representations produced by these layers also points in a latent space? I cannot seem to understand this.

If so, where can I find some good literature in which latent-space representations are explained at this more general level for CNNs?

b0neval

Does a CNN always learn a latent space?

A latent space is a compressed representation of some dataset according to this article.

Through the use of convolutional and pooling layers, a CNN essentially squashes an input image into a compressed representation too. What I want to understand is: can we look at this as just another latent-space representation, as we do when we look at generative models or autoencoders?

If so, where can I find some good literature in which latent-space representations are explained at a more general level, rather than only from the perspective of generative models or autoencoders?