Multi-task networks rely on effective parameter sharing to achieve robust generalization across tasks. In this paper, we present a novel parameter-sharing method for multi-task learning that conditions parameter sharing on both the task and the intermediate feature representations at inference time. In contrast to traditional parameter-sharing approaches, which fix or learn a deterministic sharing pattern during training and apply the same pattern to every example during inference, we propose to dynamically decide which parts of the network to activate based on both the task and the input instance. Our approach learns a hierarchical gating policy, consisting of a task-specific policy for coarse layer selection and gating units for individual input instances, which together determine the execution path at inference time. Experiments on the NYU v2, Cityscapes, and MIMIC-III datasets demonstrate the potential of the proposed approach and its applicability across problem domains.
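To make the hierarchical gating idea concrete, below is a minimal PyTorch sketch, not the authors' implementation. It pairs a learnable per-task logit for each layer (coarse task-level selection) with a small input-conditioned gate, and uses their product to softly blend each block's output with a skip connection. The class and parameter names are illustrative, and the soft sigmoid relaxation of the execute/skip decision is an assumption; the paper's actual policy learning may use discrete decisions.

```python
import torch
import torch.nn as nn

class TaskInstanceGatedNet(nn.Module):
    """Hypothetical sketch: task- and instance-conditioned layer gating."""

    def __init__(self, num_tasks: int, num_layers: int, dim: int):
        super().__init__()
        # Shared backbone: a stack of simple residual-style blocks.
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_layers)]
        )
        # Task-specific policy: one learnable logit per (task, layer) pair.
        self.task_policy = nn.Parameter(torch.zeros(num_tasks, num_layers))
        # Instance-conditioned gating units: a tiny gate per layer.
        self.instance_gates = nn.ModuleList(
            [nn.Linear(dim, 1) for _ in range(num_layers)]
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        for layer, (block, gate) in enumerate(zip(self.blocks, self.instance_gates)):
            # Coarse task-level decision for this layer.
            task_score = torch.sigmoid(self.task_policy[task_id, layer])
            # Fine-grained, input-dependent decision, shape (batch, 1).
            instance_score = torch.sigmoid(gate(x))
            g = task_score * instance_score  # combined soft execution gate
            # Soft relaxation of the binary skip/execute choice:
            # blend the block output with a skip path according to the gate.
            x = g * block(x) + (1.0 - g) * x
        return x

model = TaskInstanceGatedNet(num_tasks=2, num_layers=4, dim=32)
out = model(torch.randn(8, 32), task_id=0)
print(out.shape)  # torch.Size([8, 32])
```

At inference, thresholding or sampling the combined gate would yield an execution path that varies per task and per input instance, which is the behavior the abstract describes.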
BibTeX
@misc{rahimian2023dynashare,
  title={DynaShare: Task and Instance Conditioned Parameter Sharing for Multi-Task Learning},
  author={Elahe Rahimian and Golara Javadi and Frederick Tung and Gabriel Oliveira},
  year={2023},
  eprint={2305.17305},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}