Browsing by Subject "Equivariance"
Item: Equivariance in GAN Critics (2019-05), Upadhyay, Yash

Equivariance allows learning a representation that disentangles an entity or a feature from its meta-properties. Spatially equivariant representations capture more detailed information from the image space than spatially invariant ones. Convolutional Neural Networks, the current workhorses of image-based analysis, are built with baked-in spatial invariance, which helps in tasks like object detection. However, tasks like image synthesis, which require learning an accurate manifold in order to generate visually accurate and diverse images, suffer from this incorporated invariance. Equivariant architectures such as Capsule Networks prove to be better critics for Generative Adversarial Networks because they learn disentangled representations of the meta-properties of the entities they represent. This helps GANs learn the data manifold much faster and therefore synthesize visually accurate images with significantly fewer training samples and training epochs than GAN variants that use CNNs. Besides proposing architectures that incorporate Capsule Networks into GANs, the thesis also assesses the effects of varying degrees of invariance on the quality and diversity of the generated images.
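To make the idea of a capsule-based GAN critic concrete, here is a minimal PyTorch sketch. It is not the thesis's architecture: the class and function names (CapsuleCritic, PrimaryCapsules, squash), the layer sizes, and the omission of dynamic routing are all simplifying assumptions for illustration. The point is that the critic's features are capsule vectors, whose orientations encode pose information a pooled CNN critic would discard.

```python
# Illustrative sketch of a capsule-based GAN critic (assumed architecture,
# not the thesis's exact model). Dynamic routing is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    """Capsule non-linearity: keeps vector orientation, maps norm into [0, 1)."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / torch.sqrt(sq_norm + eps)


class PrimaryCapsules(nn.Module):
    """Convolution whose output channels are regrouped into capsule vectors."""
    def __init__(self, in_ch=64, caps=8, caps_dim=8):
        super().__init__()
        self.caps_dim = caps_dim
        self.conv = nn.Conv2d(in_ch, caps * caps_dim, kernel_size=9, stride=2)

    def forward(self, x):
        u = self.conv(x)                            # (B, caps*dim, H, W)
        u = u.view(x.size(0), -1, self.caps_dim)    # (B, num_capsules, dim)
        return squash(u)


class CapsuleCritic(nn.Module):
    """GAN critic: conv stem -> primary capsules -> scalar realness score."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(1, 64, kernel_size=9)  # 28x28 -> 20x20
        self.primary = PrimaryCapsules(64, caps=8, caps_dim=8)
        self.head = nn.Linear(8, 1)                  # score per capsule vector

    def forward(self, img):
        h = F.leaky_relu(self.stem(img), 0.2)
        caps = self.primary(h)                       # (B, N, 8) capsule vectors
        scores = self.head(caps).squeeze(-1)         # (B, N)
        return scores.mean(dim=1)                    # (B,) one score per image


if __name__ == "__main__":
    critic = CapsuleCritic()
    fake = torch.randn(4, 1, 28, 28)                 # e.g. MNIST-sized samples
    print(critic(fake).shape)                        # torch.Size([4])
```

A critic like this can be dropped into a standard GAN training loop in place of a CNN discriminator; the generator then receives gradients through pose-preserving capsule features rather than pooled, spatially invariant ones.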