Mode‑Aware Continual Learning for Conditional GANs
Abstract
We introduce a discriminator‑based Mode Affinity Score (dMAS) to quantify similarity between generative modes (tasks) in conditional GANs, and a mode‑aware continual learning framework that leverages relevant prior modes to quickly learn a new target mode while avoiding catastrophic forgetting. The framework first evaluates which learned modes are closest to the target via dMAS, then injects a new mode using a weighted label embedding derived from those closest modes, alongside memory replay. Extensive experiments on MNIST, CIFAR‑10, CIFAR‑100, and Oxford Flowers show robust dMAS behavior and competitive results versus baselines including sequential fine‑tuning, multi‑task learning, EWC‑GAN, Lifelong‑GAN, and CAM‑GAN.
TL;DR: This paper introduces a novel mode-aware continual learning method for Conditional Generative Adversarial Networks (cGANs). The key challenge addressed is how to learn new data distributions (modes) with limited samples while preserving previously learned ones—a common issue known as catastrophic forgetting.
Method in Brief
- Train cGAN on existing modes (tasks).
- Compute dMAS between each existing mode and the new target mode.
- Select top‑k closest modes and form the target’s weighted label embedding using the normalized dMAS weights.
- Fine‑tune cGAN on the target data while replaying samples from the relevant modes to preserve past performance.
emb(y_target) = Σ_{i∈C} ( s_i / Σ_{j∈C} s_j ) · emb(y_i)

where C is the set of relevant (closest) modes and s_i is the dMAS score between mode i and the target.
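The weighted label embedding above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's code: `scores` holds the dMAS values s_i for the chosen set C, and `embeddings` the corresponding label embeddings emb(y_i).

```python
import numpy as np

def target_embedding(scores, embeddings):
    """Form the target mode's label embedding as a dMAS-weighted
    average of the closest modes' label embeddings."""
    s = np.asarray(scores, dtype=float)
    w = s / s.sum()           # normalize the dMAS scores over the chosen set C
    E = np.stack(embeddings)  # shape (k, d): one embedding per relevant mode
    return w @ E              # emb(y_target) = sum_i w_i * emb(y_i)
```

With two equally scored modes, the target embedding is simply their midpoint, which matches the formula's behavior when all s_i are equal.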
Why dMAS?
Popular metrics such as FID and IS compare image distributions and may not reflect the model's internal state. In contrast, dMAS leverages the discriminator to measure distances in parameter space (via Fisher Information–based geometry), yielding a stable similarity measure that is invariant to initialization and better aligned with the dynamics of continual generative learning.
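To make the idea concrete, here is a hedged sketch of a Fisher-based mode distance. The paper's exact definition is not reproduced here; this sketch assumes a diagonal Fisher approximation from squared gradients of the discriminator's real/fake loss and a Fréchet-style distance between the resulting parameter-space "fingerprints". The names `diag_fisher` and `mode_affinity` are illustrative, not from the paper.

```python
import torch

def diag_fisher(discriminator, data_loader, device="cpu"):
    """Diagonal Fisher information of the discriminator's parameters,
    estimated from squared gradients of its real/fake log-likelihood."""
    fisher = {n: torch.zeros_like(p) for n, p in discriminator.named_parameters()}
    n_batches = 0
    for x, y in data_loader:
        discriminator.zero_grad()
        out = discriminator(x.to(device), y.to(device))
        # squared gradients of the log-likelihood approximate the Fisher diagonal
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            out, torch.ones_like(out))
        loss.backward()
        for n, p in discriminator.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

def mode_affinity(fisher_a, fisher_b):
    """One plausible distance between two modes' Fisher fingerprints
    (smaller means the modes are closer in the discriminator's geometry)."""
    a = torch.cat([f.flatten() for f in fisher_a.values()])
    b = torch.cat([f.flatten() for f in fisher_b.values()])
    return torch.norm(a.sqrt() - b.sqrt()).item() / (a.numel() ** 0.5)
```

Because the distance is computed from the discriminator's own parameter geometry rather than from generated images, it reflects the model state directly, which is the property the section above attributes to dMAS.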
Figure: The relationships among CIFAR-100 data classes via the computed dMAS distances.
Figure: Lifelong learning performance over all learned tasks, for target tasks from CIFAR-100.
Figure: Overview of the mode-aware continual learning framework for the conditional GAN.
Experiments
We validate on MNIST, CIFAR‑10, CIFAR‑100, and Oxford Flowers. dMAS consistently identifies semantically related modes (e.g., matching vehicle and animal classes between CIFAR‑100 and CIFAR‑10), enabling efficient few‑shot transfer to the target mode. With memory replay, the model preserves existing modes while integrating the new one, achieving competitive performance against strong baselines.
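The replay-based fine-tuning used to preserve existing modes can be sketched as a standard cGAN loop that mixes target batches with replayed samples. This is an illustrative sketch, not the paper's training code: the loss choice, the `replay_buffer.sample` interface, and the generator's `z_dim` attribute are all assumptions.

```python
import torch

def finetune_with_replay(generator, discriminator, target_loader, replay_buffer,
                         opt_g, opt_d, replay_ratio=0.5, steps=1000):
    """Fine-tune the cGAN on target-mode data while mixing in replayed
    (image, label) pairs from the relevant prior modes to limit forgetting."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    for _, (x_t, y_t) in zip(range(steps), target_loader):
        # mix the target batch with replayed samples from prior modes
        x_r, y_r = replay_buffer.sample(int(len(x_t) * replay_ratio))
        x, y = torch.cat([x_t, x_r]), torch.cat([y_t, y_r])
        z = torch.randn(len(x), generator.z_dim)
        fake = generator(z, y)
        # discriminator update: real vs. fake on the mixed batch
        opt_d.zero_grad()
        d_loss = (bce(discriminator(x, y), torch.ones(len(x), 1)) +
                  bce(discriminator(fake.detach(), y), torch.zeros(len(x), 1)))
        d_loss.backward()
        opt_d.step()
        # generator update: fool the discriminator on the same labels
        opt_g.zero_grad()
        g_loss = bce(discriminator(fake, y), torch.ones(len(x), 1))
        g_loss.backward()
        opt_g.step()
```

Replaying a fixed fraction of each batch from the relevant prior modes keeps their gradients present throughout fine-tuning, which is what lets the model integrate the new mode without overwriting the old ones.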
Key Contributions
- Propose dMAS, a task/mode similarity measure that reflects the state of the GAN by comparing discriminator parameter geometry.
- Theoretical analysis (Theorem 1) proving properties and trade‑offs when injecting new modes into a trained model.
- A mode‑aware continual learning procedure for cGANs that adds a target mode using a linear combination of closest modes’ label embeddings and memory replay.
Citation
@article{le2023modeaware,
  title={Mode-Aware Continual Learning for Conditional Generative Adversarial Networks},
  author={Le, Cat P. and Dong, Juncheng and Aloui, Ahmed and Tarokh, Vahid},
  journal={arXiv preprint arXiv:2305.11400},
  year={2023}
}