Mode‑Aware Continual Learning for Conditional GANs

Cat P. Le · Juncheng Dong · Ahmed Aloui · Vahid Tarokh
arXiv:2305.11400 [cs.LG] • Preprint • v3 • Sept 23, 2023

Abstract

We introduce a discriminator‑based Mode Affinity Score (dMAS) to quantify similarity between generative modes (tasks) in conditional GANs, and a mode‑aware continual learning framework that leverages relevant prior modes to quickly learn a new target mode while avoiding catastrophic forgetting. The framework first evaluates which learned modes are closest to the target via dMAS, then injects a new mode using a weighted label embedding derived from those closest modes, alongside memory replay. Extensive experiments on MNIST, CIFAR‑10, CIFAR‑100, and Oxford Flowers show robust dMAS behavior and competitive results versus baselines including sequential fine‑tuning, multi‑task learning, EWC‑GAN, Lifelong‑GAN, and CAM‑GAN.

TL;DR: This paper introduces a novel mode-aware continual learning method for Conditional Generative Adversarial Networks (cGANs). The key challenge addressed is how to learn new data distributions (modes) with limited samples while preserving previously learned ones—a common issue known as catastrophic forgetting.

Method in Brief

  1. Train cGAN on existing modes (tasks).
  2. Compute dMAS between each existing mode and the new target mode.
  3. Select top‑k closest modes and form the target’s weighted label embedding using the normalized dMAS weights.
  4. Fine‑tune cGAN on the target data while replaying samples from the relevant modes to preserve past performance.
Target label embedding (conceptual):
$$\operatorname{emb}(y_{\text{target}}) = \sum_{i \in \mathcal{C}} \frac{s_i}{\sum_{j \in \mathcal{C}} s_j} \, \operatorname{emb}(y_i)$$

where $s_i$ is the dMAS score between relevant mode $i$ and the target, and $\mathcal{C}$ is the set of selected (top-k) closest modes.
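As a concrete illustration of this initialization step, here is a minimal PyTorch sketch that follows the formula above. All names (`label_emb`, `closest_modes`, `scores`) are hypothetical, and we assume the cGAN conditions on class labels through a standard embedding table:

```python
import torch
import torch.nn as nn

# Hypothetical setup: 10 previously learned modes plus one slot for the target.
num_modes, embed_dim = 10, 128
label_emb = nn.Embedding(num_modes + 1, embed_dim)

closest_modes = torch.tensor([2, 5, 7])   # indices of the top-k closest modes (set C)
scores = torch.tensor([0.8, 0.5, 0.3])    # dMAS-derived scores s_i for i in C

weights = scores / scores.sum()           # s_i / sum_j s_j
with torch.no_grad():
    # Convex combination of the closest modes' embeddings.
    target_emb = (weights.unsqueeze(1) * label_emb(closest_modes)).sum(dim=0)
    label_emb.weight[num_modes] = target_emb  # initialize the target mode's embedding
```

Initializing the new mode's embedding as this weighted combination lets fine-tuning start from a point already close to the most relevant learned modes.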

Why dMAS?

Popular metrics such as FID and IS compare distributions of generated images and therefore reflect a model's outputs rather than its internal state. In contrast, dMAS leverages the discriminator to measure distances in parameter space via Fisher Information–based geometry, yielding a similarity measure that is stable across runs, robust to weight initialization, and better aligned with the dynamics of continual generative learning.
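The paper defines dMAS precisely; as a rough, hedged sketch of the underlying idea, one can approximate the discriminator's Fisher Information diagonally on each mode's data and compare the approximations with a Fréchet-style distance. The names `disc`, `loader_a`/`loader_b`, and the loss choice below are placeholders, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def diag_fisher(disc, loader, device="cpu"):
    """Diagonal Fisher Information approximation for a conditional discriminator.

    Averages squared gradients of a per-batch discriminator loss over `loader`.
    """
    fisher = [torch.zeros_like(p) for p in disc.parameters()]
    n_batches = 0
    for x, y in loader:
        disc.zero_grad()
        out = disc(x.to(device), y.to(device))
        # Placeholder loss on real samples; any scalar discriminator loss
        # works for this rough approximation.
        loss = F.softplus(-out).mean()
        loss.backward()
        for f, p in zip(fisher, disc.parameters()):
            if p.grad is not None:
                f += p.grad.detach() ** 2
        n_batches += 1
    return [f / max(n_batches, 1) for f in fisher]

def dmas(fisher_a, fisher_b):
    """Fréchet-style distance between two diagonal Fisher approximations."""
    total = torch.tensor(0.0)
    for fa, fb in zip(fisher_a, fisher_b):
        total = total + torch.sum((fa.sqrt() - fb.sqrt()) ** 2)
    return torch.sqrt(0.5 * total).item()
```

A small `dmas(diag_fisher(D, loader_a), diag_fisher(D, loader_b))` value indicates that the two modes shape the discriminator's parameter-space geometry similarly, which is the signal used to rank candidate source modes.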

Experiments

We validate on MNIST, CIFAR‑10, CIFAR‑100, and Oxford Flowers. dMAS consistently identifies semantically related modes (e.g., matching vehicle and animal classes between CIFAR‑100 and CIFAR‑10), enabling efficient few‑shot transfer to the target mode. With memory replay, the model preserves existing modes while integrating the new one, achieving competitive performance against strong baselines.
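To make the replay mechanism concrete, the sketch below shows one hypothetical fine-tuning step that mixes a target-mode batch with replayed samples from the selected source modes (non-saturating GAN losses assumed; the paper's exact objectives and schedules may differ):

```python
import torch
import torch.nn.functional as F

def finetune_step(gen, disc, g_opt, d_opt, target_batch, replay_batch, z_dim=128):
    """One mode-aware fine-tuning step: target-mode data plus memory replay."""
    # Mix real images/labels from the new mode with replayed source-mode samples.
    x = torch.cat([target_batch[0], replay_batch[0]])
    y = torch.cat([target_batch[1], replay_batch[1]])

    # Discriminator update: real vs. generated samples for the mixed labels.
    z = torch.randn(x.size(0), z_dim)
    fake = gen(z, y).detach()
    d_loss = F.softplus(-disc(x, y)).mean() + F.softplus(disc(fake, y)).mean()
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update on the same mix of target and replayed labels.
    z = torch.randn(x.size(0), z_dim)
    g_loss = F.softplus(-disc(gen(z, y), y)).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

Replaying labels from the relevant source modes in every step is what keeps their generation quality from degrading while the target mode is learned.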

Key Contributions

  1. dMAS: a discriminator‑based Mode Affinity Score that quantifies similarity between generative modes (tasks) in conditional GANs.
  2. A mode‑aware continual learning framework that injects a new target mode via a weighted label embedding over its closest learned modes, combined with memory replay to avoid catastrophic forgetting.
  3. Experiments on MNIST, CIFAR‑10, CIFAR‑100, and Oxford Flowers demonstrating robust dMAS behavior and competitive performance against sequential fine‑tuning, multi‑task learning, EWC‑GAN, Lifelong‑GAN, and CAM‑GAN.

Citation

@article{le2023modeaware,
  title={Mode-Aware Continual Learning for Conditional Generative Adversarial Networks},
  author={Le, Cat P. and Dong, Juncheng and Aloui, Ahmed and Tarokh, Vahid},
  journal={arXiv preprint arXiv:2305.11400},
  year={2023}
}

Contact

Questions about this work? Reach out: calvine.le@gmail.com

More: Google Scholar