Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence

Authors: Mengyao Lyu, Tianxiang Hao, Xinhao Xu, Hui Chen, Zijia Lin, Jungong Han, Guiguang Ding

Abstract: Domain Adaptation (DA) facilitates knowledge transfer from a source domain to
a related target domain. This paper investigates a practical DA paradigm,
namely Source data-Free Active Domain Adaptation (SFADA), where source data
becomes inaccessible during adaptation and only a minimal annotation budget
is available in the target domain. Without referencing the source data,
new challenges emerge in identifying the most informative target samples for
labeling, establishing cross-domain alignment during adaptation, and ensuring
continuous performance improvements through the iterative query-and-adaptation
process. In response, we present Learn From The Learnt (LFTL), a novel SFADA
paradigm that leverages the knowledge already learnt by the source-pretrained
model and by actively iterated models, at no extra overhead. We propose Contrastive Active
Sampling to learn from the hypotheses of the preceding model, thereby querying
target samples that are both informative to the current model and persistently
challenging throughout active learning. During adaptation, Visual
Persistence-guided Adaptation learns from features of actively selected anchors
retained from previous intermediate models, facilitating feature-distribution
alignment and active-sample exploitation. Extensive experiments on
three widely-used benchmarks show that our LFTL achieves state-of-the-art
performance, superior computational efficiency, and continuous improvements as
the annotation budget increases. Our code is available at
https://github.com/lyumengyao/lftl.
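The abstract's query criterion (samples that are informative to the current model yet remain challenging relative to the preceding model's hypotheses) can be sketched as a simple scoring rule. This is a minimal illustration under assumed choices, not the paper's exact formulation: here "informativeness" is taken as the current model's predictive entropy and "persistent difficulty" as the KL divergence from the previous round's predictions, combined with an unweighted sum; the function name and interface are hypothetical.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over class logits."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy(p, eps=1e-12):
    """Predictive entropy per sample; high = uncertain = informative."""
    return -(p * np.log(p + eps)).sum(axis=-1)

def contrastive_active_sampling(curr_logits, prev_logits, budget):
    """Illustrative sketch (not the paper's exact criterion): score each
    unlabeled target sample by (a) the current model's predictive entropy
    and (b) its divergence from the preceding round's hypotheses, then
    return the indices of the top-`budget` samples to query for labels."""
    p_curr = softmax(curr_logits)
    p_prev = softmax(prev_logits)
    informativeness = entropy(p_curr)
    # KL(current || previous): large when the two rounds disagree,
    # i.e. the sample stays hard across active-learning iterations.
    disagreement = (p_curr * (np.log(p_curr + 1e-12)
                              - np.log(p_prev + 1e-12))).sum(axis=-1)
    scores = informativeness + disagreement
    return np.argsort(-scores)[:budget]
```

For example, a sample on which the current and previous models confidently disagree outranks both a confidently agreed-upon sample and a merely uncertain one, so it is queried first.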

Source: http://arxiv.org/abs/2407.18899v1
