In general, a latent space is a compressed representation of some dataset, with reduced dimensionality compared to the input space, in which points that lie closer together are more similar.
This article also refers to the layers of a convolutional neural network as a latent space (see the diagram). Some CNNs essentially squash an input image into a compressed representation through appropriate use of convolutional and pooling layers.
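To make the "squashing" concrete, here is a minimal NumPy-only sketch of what stacked convolution and pooling layers do to dimensionality. The kernel is random and untrained, so this is purely illustrative of the shape reduction, not of learned features; the function names and sizes are my own assumptions, not from the article.

```python
import numpy as np

def conv2d(img, kernel):
    # "valid" cross-correlation (what deep-learning frameworks call convolution),
    # no padding: output shrinks by (kernel size - 1) in each dimension
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    # non-overlapping max pooling: halves each spatial dimension (size=2)
    h, w = x.shape
    h2, w2 = h // size, w // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.random((28, 28))              # a 784-dimensional input image
kernel = rng.standard_normal((3, 3))    # random, untrained filter

x = max_pool(np.maximum(conv2d(img, kernel), 0))  # conv -> ReLU -> pool: 13x13
x = max_pool(np.maximum(conv2d(x, kernel), 0))    # conv -> ReLU -> pool: 5x5
latent = x.flatten()                              # a 25-dimensional vector

print(img.size, "->", latent.size)  # 784 -> 25
```

Two conv/pool stages take the 784-dimensional image down to a 25-dimensional vector, which is the sense in which the intermediate activations can be read as points in a lower-dimensional space.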
What I want to understand is: can we really look at the CNN's layers as the same kind of latent space described in the former definition? That is, are the feature representations generated by these layers also like points in a latent space? I cannot seem to work this out.
If so, where can I find some good literature where a latent space representation is explained like this for CNNs, rather than only from a generative-model or autoencoder perspective?