TY - GEN
T1 - Domain Conditioned Adaptation Network
AU - Li, Shuang
AU - Liu, Chi Harold
AU - Lin, Qiuxia
AU - Xie, Binhui
AU - Ding, Zhengming
AU - Huang, Gao
AU - Tang, Jian
N1 - Publisher Copyright:
© 2020, Association for the Advancement of Artificial Intelligence.
PY - 2020
Y1 - 2020
N2 - Tremendous research efforts have been made to advance deep domain adaptation (DA) by seeking domain-invariant features. Most existing deep DA models focus only on aligning feature representations of task-specific layers across domains while adopting a fully shared convolutional architecture for source and target. However, we argue that such strongly shared convolutional layers might be harmful for domain-specific feature learning when the source and target data distributions differ to a large extent. In this paper, we relax the shared-convnets assumption made by previous DA methods and propose a Domain Conditioned Adaptation Network (DCAN), which aims to excite distinct convolutional channels with a domain conditioned channel attention mechanism. As a result, the critical low-level domain-dependent knowledge can be explored appropriately. To the best of our knowledge, this is the first work to explore domain-wise convolutional channel activation for deep DA networks. Moreover, to effectively align high-level feature distributions across the two domains, we further deploy domain conditioned feature correction blocks after the task-specific layers, which explicitly correct the domain discrepancy. Extensive experiments on three cross-domain benchmarks demonstrate that the proposed approach outperforms existing methods by a large margin, especially on very tough cross-domain learning tasks.
AB - Tremendous research efforts have been made to advance deep domain adaptation (DA) by seeking domain-invariant features. Most existing deep DA models focus only on aligning feature representations of task-specific layers across domains while adopting a fully shared convolutional architecture for source and target. However, we argue that such strongly shared convolutional layers might be harmful for domain-specific feature learning when the source and target data distributions differ to a large extent. In this paper, we relax the shared-convnets assumption made by previous DA methods and propose a Domain Conditioned Adaptation Network (DCAN), which aims to excite distinct convolutional channels with a domain conditioned channel attention mechanism. As a result, the critical low-level domain-dependent knowledge can be explored appropriately. To the best of our knowledge, this is the first work to explore domain-wise convolutional channel activation for deep DA networks. Moreover, to effectively align high-level feature distributions across the two domains, we further deploy domain conditioned feature correction blocks after the task-specific layers, which explicitly correct the domain discrepancy. Extensive experiments on three cross-domain benchmarks demonstrate that the proposed approach outperforms existing methods by a large margin, especially on very tough cross-domain learning tasks.
UR - http://www.scopus.com/inward/record.url?scp=85106509324&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85106509324&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85106509324
T3 - AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
SP - 11386
EP - 11393
BT - AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
PB - AAAI Press
T2 - 34th AAAI Conference on Artificial Intelligence, AAAI 2020
Y2 - 7 February 2020 through 12 February 2020
ER -
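
Note (not part of the record): the abstract describes a domain conditioned channel attention mechanism that lets source and target data excite different convolutional channels. Below is a minimal, hypothetical PyTorch sketch of that idea, assuming a squeeze-and-excitation style gate with separate excitation branches per domain; the class name, layer sizes, and reduction ratio are illustrative assumptions and not the authors' released implementation.

# Hypothetical sketch of domain conditioned channel attention (not the authors' code).
# Assumption: an SE-style gate with per-domain excitation branches, so source and
# target inputs can activate distinct convolutional channels.
import torch
import torch.nn as nn

class DomainConditionedChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)          # global average pool per channel
        def branch():
            return nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )
        self.source_branch = branch()                   # excitation path for source data
        self.target_branch = branch()                   # excitation path for target data

    def forward(self, x: torch.Tensor, is_source: bool) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.squeeze(x).view(b, c)                  # per-channel descriptor
        w = self.source_branch(s) if is_source else self.target_branch(s)
        return x * w.view(b, c, 1, 1)                   # re-weight channels per domain

# Usage sketch: wrap a backbone feature map during the forward pass.
# attn = DomainConditionedChannelAttention(channels=256)
# source_feat = attn(source_feat, is_source=True)
# target_feat = attn(target_feat, is_source=False)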