
Both GANs and Autoencoders are generative models, which means they learn to model a given data distribution rather than explicitly estimate its density. The key difference is how they do it.

Autoencoders learn a given distribution by comparing their input to their output. This works well for learning hidden representations of data, but it is fairly bad at generating new data, mainly because the reconstruction loss encourages an averaged representation of the data, so the outputs come out blurry.
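Here is a minimal PyTorch sketch of that idea (the architecture, sizes, and MSE loss are illustrative assumptions on my part, not something fixed by the method): the loss directly compares input and reconstruction, which is exactly the pixel-wise averaging that makes samples blurry.

```python
import torch
import torch.nn as nn

# Minimal autoencoder sketch (assumed architecture): encoder compresses
# 28x28 images to a small latent vector, decoder reconstructs them.
class Autoencoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)                      # hidden representation
        return self.decoder(z).view(-1, 1, 28, 28)

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 1, 28, 28)                    # stand-in batch of images
recon = model(x)
loss = nn.functional.mse_loss(recon, x)          # averaged pixel-wise error -> blurry samples
opt.zero_grad()
loss.backward()
opt.step()
```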

Generative Adversarial Networks take an entirely different approach. They use another network (the so-called Discriminator) to measure the distance between the generated and the real data. Essentially, the Discriminator's job is to distinguish real data from generated data: it receives some data as input and returns a number between 0 and 1, where 0 means the data is fake and 1 means it is real. The Generator's goal is then to learn to fool the Discriminator into believing it is producing real data.
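A minimal training-step sketch of that adversarial game in PyTorch (all architectures, sizes, and hyperparameters here are illustrative assumptions): the Discriminator is pushed to output 1 for real data and 0 for fakes, while the Generator is trained to make the Discriminator output 1 for its fakes.

```python
import torch
import torch.nn as nn

latent_dim = 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),             # 0 = fake, 1 = real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(64, 28 * 28) * 2 - 1           # stand-in batch of real images
z = torch.randn(64, latent_dim)
fake = generator(z)

# Discriminator step: push real toward 1, fake toward 0.
d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the Discriminator into outputting 1 for the fakes.
g_loss = bce(discriminator(fake), torch.ones(64, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```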

The main advantage of GANs over Autoencoders in generating data is that they can be conditioned on different inputs. For example, you can learn the mapping between two domains, such as satellite images to Google Maps, or you can condition the generator to reproduce several classes of data, such as the digits of the MNIST dataset (see the sketch below).
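As a rough sketch of that kind of conditioning (again with an assumed, illustrative architecture): embedding a class label and feeding it alongside the noise lets you pick which digit the generator produces.

```python
import torch
import torch.nn as nn

latent_dim, num_classes = 64, 10

# Conditional generator sketch: the class label (e.g. an MNIST digit 0-9)
# is embedded and concatenated with the noise vector.
class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + num_classes, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),
        )

    def forward(self, z, labels):
        cond = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(cond).view(-1, 1, 28, 28)

gen = ConditionalGenerator()
z = torch.randn(16, latent_dim)
labels = torch.full((16,), 7, dtype=torch.long)  # ask for sixteen "7"s
samples = gen(z, labels)                         # (16, 1, 28, 28) fake digits
```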

Because of these differences, they are suited to different tasks. Autoencoders are better for compressing data to lower dimensions or extracting semantic vectors from it, whereas GANs are better for generating data.

Hope I answered your question; if not, feel free to ask me one :)

