Deep learning can be used to produce hierarchical models that represent probability distributions over the kinds of data encountered in artificial intelligence applications, such as robotics, speech recognition, and symbols in natural language processing [1]. Deep generative models have had limited impact, owing to the difficulty of approximating the many intractable probabilistic computations that arise in maximum likelihood estimation and related strategies, and to the difficulty of leveraging the benefits of piecewise linear units in the generative setting [2]. Generative Adversarial Networks (GANs) can be used in place of maximum likelihood techniques.

GANs have strong capabilities and have even been applied to feature extraction and to classification tasks [3]. These tasks are generally performed by incorporating a feature-matching technique for training the generator network and by multitask training of the discriminator network, which plays an additional role as a classifier. A GAN contains a generative model, which produces samples by passing random noise through a multilayer perceptron (a class of feed-forward artificial neural network), and a discriminative model, which is also a multilayer perceptron. The combined system is called an adversarial net. Both models are trained with backpropagation to optimise their weights, dropout can be employed during training, and samples are drawn from the generative model using only forward propagation.
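The alternating backpropagation-based training just described can be sketched in a toy one-dimensional setting. This is an illustrative example, not code from any cited work: the generator and discriminator here are deliberately tiny linear/logistic models rather than multilayer perceptrons, and the data distribution, learning rate, and step counts are arbitrary choices; only the structure of the minimax updates reflects the GAN framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy adversarial game in 1-D (illustrative sketch only):
# real data ~ N(3, 0.5); generator G(z) = a*z + b maps standard-normal
# noise into data space; discriminator D(x) = sigmoid(w*x + c) scores
# samples as real vs. generated. Both are trained by alternating
# gradient steps, as in the GAN minimax game.
a, b = 1.0, 0.0        # generator parameters (assumed initial values)
w, c = 0.1, 0.0        # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    x_real = rng.normal(3.0, 0.5, batch)
    z = rng.normal(size=batch)
    x_fake = a * z + b

    # Discriminator ascent on log D(x_real) + log(1 - D(G(z)))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascent on log D(G(z)) (the non-saturating heuristic)
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - d_fake) * w      # d/dx log D(x), evaluated at the fakes
    a += lr * np.mean(grad_x * z)
    b += lr * np.mean(grad_x)

print(f"generated mean after training: {b:.2f} (data mean is 3.0)")
```

Note that, unlike Boltzmann-machine training, every quantity above is computed by forward propagation and exact gradients; no Markov chain sampling is involved.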
GANs do not require the approximate inference or Markov chains needed by classical Boltzmann machines. The objective of a GAN is to reach an equilibrium between a generator and a discriminator, whereas the goal of a VAE is to maximise a lower bound on the data log-likelihood.

II. STATE OF THE ART

Deep generative models such as Deep Belief Networks (DBNs), Deep Boltzmann Machines (DBMs), and Restricted Boltzmann Machines (RBMs) used MCMC-based algorithms to train their networks [4], [5]. In these approaches, Markov Chain Monte Carlo (MCMC) methods compute the gradient of the log-likelihood, which introduces growing imprecision as training progresses, because samples from the Markov chains are unable to mix between modes quickly.
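For concreteness, the two objectives contrasted at the start of this section can be written out; the notation below follows the standard GAN and VAE formulations and is not taken from this text. The GAN trains its two networks on the minimax value function, while the VAE maximises the evidence lower bound (ELBO) on the data log-likelihood:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]

\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
  - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p_\theta(z)\big)
```

The equilibrium of the first game is reached when the generator distribution matches the data distribution; the second bound is tight when the approximate posterior equals the true posterior.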
Several generative models have been developed that are trained via direct back-propagation, avoiding the difficulties that come with MCMC training [6]. The structural similarity (SSIM) index was applied as an autoencoder (AE) reconstruction metric for grey-scale images in [7]. Simple VAEs have been extended to importance weighted VAEs to obtain a tighter lower bound [8]. Several new forms of GANs have been developed, some combining them with VAEs for improved formulations and sample generation. The adversarial principle has also found application beyond the generation setting, in problems such as domain adaptation and Bayesian inference, where implicit variational distributions in VAEs encourage adversarial methods for optimisation [9].
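The tighter lower bound obtained by importance weighted VAEs [8] averages k importance-weighted posterior samples rather than one; the notation below is the standard importance-weighted bound, not taken from this text:

```latex
\mathcal{L}_k(x) =
  \mathbb{E}_{z_1, \ldots, z_k \sim q_\phi(z \mid x)}
  \left[ \log \frac{1}{k} \sum_{i=1}^{k}
    \frac{p_\theta(x, z_i)}{q_\phi(z_i \mid x)} \right],
\qquad
\mathcal{L}_1 \le \mathcal{L}_2 \le \cdots \le \log p_\theta(x)
```

Setting k = 1 recovers the standard VAE bound, and the bound tightens monotonically as k increases, approaching the true log-likelihood in the limit.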