Object-Centric Multi-Task Learning for Human Instances

Hyeongseok Son, Sangil Jung, Solae Lee, Seongeun Kim, Seung-In Park, ByungIn Yoo
Samsung Advanced Institute of Technology (SAIT)
BMVC 2023 (Oral)

Abstract

Humans are one of the most essential classes in visual recognition tasks such as detection, segmentation, and pose estimation. Despite considerable efforts to address these tasks individually, their integration within a multi-task learning framework remains relatively unexplored. In this paper, we explore a compact multi-task network architecture that maximally shares parameters across multiple tasks via object-centric learning. To this end, we introduce a novel human-centric query (HCQ) that effectively encodes human instance information, including explicit structural information such as keypoints. In addition, we use HCQ directly in the prediction heads of the target tasks and interweave HCQ with deformable attention in the Transformer decoders to exploit the well-learned object-centric representation. Experimental results show that the proposed multi-task network achieves accuracy comparable to state-of-the-art task-specific models on human detection, segmentation, and pose estimation, while incurring lower computational cost.
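
To make the shared-query idea concrete, below is a minimal PyTorch sketch: one set of human-centric queries is refined by a Transformer decoder and then feeds detection, pose, and segmentation heads directly from the same query tensor. This is an illustration under assumptions, not the paper's implementation: deformable attention is replaced here with standard multi-head attention, and all module names, dimensions, and head designs (HCQDecoderLayer, HCQMultiTaskHead, dim=256, 17 keypoints) are hypothetical.

# Minimal sketch of a human-centric query (HCQ) multi-task head, assuming a
# DETR-style decoder. Deformable attention is approximated with standard
# nn.MultiheadAttention; names and dimensions are illustrative, not the paper's.
import torch
import torch.nn as nn

class HCQDecoderLayer(nn.Module):
    """One decoder layer: queries self-attend, then cross-attend to image features."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim) for _ in range(3))

    def forward(self, queries, features):
        q = self.norm1(queries + self.self_attn(queries, queries, queries)[0])
        q = self.norm2(q + self.cross_attn(q, features, features)[0])
        return self.norm3(q + self.ffn(q))

class HCQMultiTaskHead(nn.Module):
    """Shared human-centric queries feed three task-specific prediction heads."""
    def __init__(self, num_queries=100, dim=256, num_keypoints=17, num_layers=6):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)
        self.layers = nn.ModuleList(HCQDecoderLayer(dim) for _ in range(num_layers))
        self.box_head = nn.Linear(dim, 4)                  # detection: (cx, cy, w, h)
        self.kpt_head = nn.Linear(dim, num_keypoints * 2)  # pose: (x, y) per keypoint
        self.mask_embed = nn.Linear(dim, dim)              # segmentation: mask embedding

    def forward(self, features, mask_features):
        # features: (B, HW, dim) flattened image features
        # mask_features: (B, dim, H, W) per-pixel embeddings for mask prediction
        q = self.queries.weight.unsqueeze(0).expand(features.size(0), -1, -1)
        for layer in self.layers:
            q = layer(q, features)
        boxes = self.box_head(q).sigmoid()
        keypoints = self.kpt_head(q).sigmoid()
        masks = torch.einsum("bqd,bdhw->bqhw", self.mask_embed(q), mask_features)
        return boxes, keypoints, masks

With batch size 2 and 64x64 features, head(feats, mask_feats) returns boxes of shape (2, 100, 4), keypoints of shape (2, 100, 34), and masks of shape (2, 100, 64, 64). The point of the sketch is that all three heads read the same refined query tensor, so task parameters are maximally shared rather than duplicated per task.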

BibTeX


@InProceedings{Son2023HCQ,
    author    = {Son, Hyeongseok and Jung, Sangil and Lee, Solae and Kim, Seongeun and Park, Seung-In and Yoo, ByungIn},
    title     = {Object-Centric Multi-Task Learning for Human Instances},
    booktitle = {Proceedings of the British Machine Vision Conference (BMVC)},
    year      = {2023},
}