Introduction
Generative adversarial networks (GANs) are among the standard models for image and video generation, showing promising results in both unconditional and conditional setups.
In recent years, deep learning has proven to be an extremely useful tool for discriminative tasks: through layers of linear transforms combined with nonlinearities, these systems learn to transform their input into a representation across which we can draw clear decision boundaries.

We show positive results using the incorporated conditioning data to control particular attributes of faces sampled from the model deterministically. We also demonstrate that the model benefits from this external input: by using such conditional data as a “prior” on generation, the model can better navigate the space of possible outputs.
In this article, we will learn about conditional generative adversarial networks (cGANs).
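Before moving on, here is a minimal PyTorch sketch of the core conditioning idea. The class name, layer sizes, and concatenation scheme below are illustrative assumptions rather than any particular paper's architecture: the label is embedded and fed to the generator alongside the noise vector, acting as the “prior” described above.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Minimal sketch: embed the class label and concatenate it with the
    noise vector, so the label steers which mode the generator samples."""

    def __init__(self, noise_dim=100, num_classes=10, embed_dim=50, img_dim=28 * 28):
        super().__init__()
        self.label_embedding = nn.Embedding(num_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, noise, labels):
        cond = torch.cat([noise, self.label_embedding(labels)], dim=1)
        return self.net(cond)

# Usage: sample one image of (hypothetical) class 3.
g = ConditionalGenerator()
z = torch.randn(1, 100)
img = g(z, torch.tensor([3]))  # shape: (1, 784)
```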
What are Conditional GANs?
Prior and concurrent works have conditioned GANs on discrete labels, text, and images. Image-conditional models have tackled image prediction from a normal map, future frame prediction, product photo generation, and image generation from sparse annotations.
Several other papers have also used GANs for image-to-image mappings, but they apply the GAN unconditionally, relying on other loss terms to force the output to be conditioned on the input.
These papers have achieved impressive results on tasks such as inpainting, future state prediction, image manipulation guided by user constraints, style transfer, and super-resolution.
Each of these methods was tailored for a specific application. Our framework differs in that nothing is application-specific, which makes our setup considerably simpler than most others.
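For intuition, here is a sketch of what conditioning the GAN on the input looks like in an image-to-image setting: the discriminator sees the source image and the (real or generated) target concatenated along the channel axis, so realism is judged given the input rather than unconditionally. The layer sizes below are illustrative assumptions, not the exact design of any of the works above.

```python
import torch
import torch.nn as nn

class ImageConditionalDiscriminator(nn.Module):
    """Sketch: judge (source, target) pairs instead of targets alone."""

    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels * 2, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # grid of per-patch real/fake scores
        )

    def forward(self, source, target):
        # Concatenating along channels ties the realism judgment to the input.
        return self.net(torch.cat([source, target], dim=1))

d = ImageConditionalDiscriminator()
src, tgt = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
scores = d(src, tgt)  # shape: (1, 1, 15, 15)
```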

Class-conditional GANs
Different architectures and loss functions have been proposed for conditioning GANs on class labels. Current state-of-the-art methods for class conditioning commonly employ a cGAN with a projection discriminator.
The generator applies conditional batch normalization at each layer, using class-specific scale and shift parameters.
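A minimal sketch of conditional batch normalization in PyTorch; the initialization choices are common defaults, assumed here for illustration:

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Normalize features as usual, then scale and shift them with
    per-class gamma/beta parameters looked up from embedding tables."""

    def __init__(self, num_features, num_classes):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)  # no shared affine
        self.gamma = nn.Embedding(num_classes, num_features)
        self.beta = nn.Embedding(num_classes, num_features)
        nn.init.ones_(self.gamma.weight)   # start as an identity affine map
        nn.init.zeros_(self.beta.weight)

    def forward(self, x, labels):
        out = self.bn(x)
        gamma = self.gamma(labels).view(-1, out.size(1), 1, 1)
        beta = self.beta(labels).view(-1, out.size(1), 1, 1)
        return gamma * out + beta
```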
The discriminator, in turn, is conditioned on the class label by computing the dot product between its last feature layer and a learnable embedding of the desired class.
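This projection trick can be sketched as a small head on top of the discriminator's feature extractor (assumed to exist upstream; the name and dimensions are illustrative):

```python
import torch
import torch.nn as nn

class ProjectionHead(nn.Module):
    """Sketch of a projection discriminator head: an unconditional score
    plus the dot product of the final features with a class embedding."""

    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.linear = nn.Linear(feat_dim, 1)              # unconditional term
        self.embed = nn.Embedding(num_classes, feat_dim)  # class embeddings

    def forward(self, features, labels):
        # features: (batch, feat_dim), e.g. globally pooled conv features
        uncond = self.linear(features)
        proj = (features * self.embed(labels)).sum(dim=1, keepdim=True)
        return uncond + proj
```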
The performance of conditional GANs was further improved by adding self-attention layers to both the generator and the discriminator.
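Such a layer can be sketched roughly as follows, loosely following the SAGAN formulation; the channel reduction factor of 8 is a common choice, not a requirement:

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Sketch: each spatial position attends over all others, capturing
    long-range dependencies that convolutions alone model poorly."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # layer starts as identity

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c/8)
        k = self.key(x).flatten(2)                    # (b, c/8, hw)
        v = self.value(x).flatten(2)                  # (b, c, hw)
        attn = torch.softmax(q @ k, dim=-1)           # (b, hw, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection
```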
Inter-class Knowledge Transfer
A growing body of work extends models trained on prior examples/images so that they perform well on new data, and this is where knowledge transfer becomes essential.
For instance, memory and attention modules have been used to transfer knowledge from labeled data to examples of a new class.
Transfer Learning in GANs
Iterative image generation approaches, such as DGN-AM and PPGN, can be considered early attempts at transfer learning in image generation: they generate images by maximizing the activations of the neurons of a pre-trained classifier.
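The underlying mechanism can be sketched as plain activation maximization; the classifier, class index, and hyperparameters below are illustrative assumptions, and systems like PPGN add a learned image prior on top of this bare loop:

```python
import torch
import torchvision.models as models

# Freeze a pre-trained classifier; only the image itself will be optimized.
classifier = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in classifier.parameters():
    p.requires_grad_(False)

target_class = 207  # hypothetical ImageNet class index
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    loss = -classifier(img)[0, target_class]  # ascend the class logit
    loss.backward()
    optimizer.step()
```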
TransferGAN is one of the earliest studies addressing transfer learning in GANs. The authors showed that, by simply fine-tuning a pre-trained network on the target dataset, they could outperform training from scratch in both image quality and convergence time.
However, naive fine-tuning on small datasets still suffers from mode collapse and training instability. Another method proposes transferring the low-level layers of the generator and the discriminator from the pre-trained network, while learning the high-level layers from scratch for the target data.
A recent study shows that simply freezing the low-level filters of the discriminator is more effective than previous fine-tuning approaches.
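A minimal sketch of that freezing idea, assuming a PyTorch discriminator whose child modules happen to be ordered from low-level to high-level (the helper name and cutoff are illustrative):

```python
import torch
import torch.nn as nn

def freeze_low_level_layers(discriminator: nn.Module, num_frozen: int):
    """Keep the first `num_frozen` child modules of a pre-trained
    discriminator fixed and fine-tune only the remaining ones."""
    for i, child in enumerate(discriminator.children()):
        if i < num_frozen:
            for p in child.parameters():
                p.requires_grad_(False)

# Usage with a hypothetical pre-trained discriminator `pretrained_D`:
# freeze_low_level_layers(pretrained_D, num_frozen=2)
# optimizer = torch.optim.Adam(
#     (p for p in pretrained_D.parameters() if p.requires_grad), lr=2e-4)
```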