Budget-Aware Adapters for Multi-Domain Learning

  @inproceedings{berriel2019budget,
    title={Budget-Aware Adapters for Multi-Domain Learning},
    author={R. Berriel and St{\'e}phane Lathuili{\`e}re and Moin Nabi and T. Klein and Thiago Oliveira-Santos and N. Sebe and E. Ricci},
    booktitle={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
    year={2019}
  }
  • Published 2019
  • Computer Science
  • Abstract: Multi-Domain Learning (MDL) refers to the problem of learning a set of models derived from a common deep architecture, each one specialized to perform a task in a certain domain (e.g., photos, sketches, paintings). [...] Key Method: To implement this idea, we derive specialized deep models for each domain by adapting a pre-trained architecture but, differently from other methods, we propose a novel strategy to automatically adjust the computational complexity of the network.
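The abstract's core idea — a domain-specific adapter on top of a shared pre-trained backbone, with a mechanism that controls how much of the adapter actually computes — can be sketched as follows. This is an illustrative NumPy sketch, not the paper's exact formulation: the residual 1x1 adapter, the binary per-channel switches, and the `budget` measure are assumptions chosen to make the idea concrete.

```python
import numpy as np

def budget_aware_adapter(x, w, switches):
    """Residual 1x1 adapter with per-channel on/off switches (illustrative).

    x:        (C, H, W) feature map from the frozen, shared backbone
    w:        (C, C) weights of a 1x1 convolution (the domain-specific adapter)
    switches: (C,) binary vector; 0 skips that adapter output channel entirely
    """
    adapted = np.einsum('oc,chw->ohw', w, x)   # 1x1 convolution over channels
    adapted *= switches[:, None, None]         # switched-off channels contribute nothing
    return x + adapted                         # residual connection keeps backbone features

def budget(switches):
    """Fraction of active adapter channels: the quantity a budget-aware
    training objective would constrain or penalize."""
    return switches.mean()

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))             # toy feature map, 8 channels
w = 0.01 * rng.standard_normal((8, 8))         # small adapter weights
s = np.array([1, 1, 0, 0, 1, 0, 0, 0], dtype=float)
y = budget_aware_adapter(x, w, s)
print(y.shape, budget(s))                      # (8, 4, 4) 0.375
```

Because switched-off channels never contribute, their multiply-accumulates can be skipped at inference time, which is how a smaller budget translates into lower computational cost per domain.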



    Publications referenced by this paper:
    • Efficient Parametrization of Multi-domain Deep Neural Networks (92 citations, highly influential)
    • Learning multiple visual domains with residual adapters (178 citations, highly influential)
    • MobileNetV2: Inverted Residuals and Linear Bottlenecks (2,551 citations)
    • Incremental Learning Through Deep Adaptation (61 citations, highly influential)
    • Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation (11,441 citations)
    • PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning (154 citations, highly influential)
    • Learning without Forgetting (742 citations)
    • Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights (96 citations, highly influential)
    • Multi-scale Continuous CRFs as Sequential Deep Networks for Monocular Depth Estimation (206 citations)