Training a deep learning classifier often requires a large amount of labeled data. In practice, however, acquiring labels is usually expensive, time-consuming, or even infeasible. Recently, unsupervised multi-source domain adaptation (UMDA) has emerged as a promising approach to predicting the labels of unlabeled data in a target domain by transferring knowledge from multiple related source domains, which is achieved by minimizing the discrepancy between the multi-source domains and the target domain. In general, the success of almost all existing UMDA methods relies on two assumptions: (1) the source domain from which each input instance comes (i.e., its domain label) is known in advance; and (2) the multi-source and target domains share the same label space, and their learning tasks have no conflicting objectives. Neither assumption, however, always holds in practice. For example, an image search for "dog" returns several types of results, such as close-up photos of dogs' faces, photos of dogs in natural surroundings, and drawings of dogs. In this scenario, domain labels would have to be attached manually before existing UMDA methods could be applied, a process that may be costly and time-consuming. Moreover, given the characteristics of UMDA, the label space of the target domain is often only a subset of the label space of the multi-source domains, resulting in a class mismatch problem that can cause negative transfer (i.e., leveraging source domain knowledge undesirably reduces the performance of the classifier in the target domain). In addition, the optimization objectives of the multi-source domains may contain constraints that conflict with the objective of the target domain, making it impractical to find a single solution that is optimal for all objectives.
Under these circumstances, it is crucial to combat the domain discrepancy so as to increase positive knowledge transfer (i.e., improving current knowledge through the gain of additional information) while reducing negative transfer. This project will therefore focus on UMDA by addressing three key questions: (1) Without domain labels in the multi-source domains, how can UMDA be made effective? (2) With different label spaces in the source and target domains, can UMDA learn a robust distribution alignment across domains that increases positive transfer while also reducing negative transfer? (3) Existing UMDA methods often minimize a weighted linear combination of the source-task classification losses and the distribution alignment losses between the source and target domains; such a combination, however, cannot resolve conflicting objectives. Is it instead possible to develop a multi-objective optimization method that models the trade-off between the conflicting objectives? To the best of our knowledge, these issues have yet to be addressed in the UMDA literature. Regarding the first question, we will propose a method that iteratively divides source instances into different latent domains via clustering, so that each source domain is represented by a latent domain. To address the second question, we will study the problem within the framework of class-imbalance learning, where the numbers of instances per class differ markedly between the source and target domains. We will then design an alignment scheme to balance the label distributions across the source and target domains, and additionally introduce a class-conditional alignment strategy between different domains to achieve better adaptation.
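To make the latent-domain idea concrete, the discovery step can be sketched as an iterative clustering routine over pre-extracted feature vectors. The snippet below is a minimal illustration only: the function name, the use of plain k-means, and the farthest-point initialization are our own assumptions for exposition, not the project's actual method.

```python
import numpy as np

def discover_latent_domains(features, n_domains, n_iters=10):
    """Assign each source instance a pseudo domain label by clustering
    its feature vector (hypothetical sketch of latent-domain discovery).

    features  : (n_instances, n_features) array of extracted features
    n_domains : assumed number of latent source domains
    """
    # Farthest-point initialization: start from instance 0, then repeatedly
    # pick the instance farthest from all centers chosen so far.
    centers = [features[0].astype(float)]
    for _ in range(n_domains - 1):
        dist = np.min(
            [np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[dist.argmax()].astype(float))
    centers = np.stack(centers)

    labels = np.zeros(len(features), dtype=int)
    for _ in range(n_iters):
        # Assignment step: each instance joins its nearest center.
        dists = np.linalg.norm(
            features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each center to the mean of its members.
        for k in range(n_domains):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    return labels
```

Each returned pseudo label would then play the role of the missing domain label in downstream per-domain adaptation losses.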
As for the third question, we will formulate UMDA as a multi-objective optimization problem and thereby find a Pareto optimal solution trading off the source-task losses against the alignment losses. Through this project, we will not only gain an in-depth understanding of UMDA, but also propose new models and algorithms to address the above challenges and extend the application domains of UMDA. The results will be applicable to various applications (e.g., object recognition, image classification, and robot learning), and the underlying techniques we develop will also benefit the broader development of machine learning.
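For intuition on why a Pareto formulation can handle objectives that a weighted linear combination cannot, consider the two-objective case (one source-task loss and one alignment loss). Désidéri's multiple-gradient descent algorithm (MGDA) gives a closed-form common descent direction: the minimum-norm convex combination of the two gradients. This is a generic illustration under our own assumptions, not the project's eventual formulation.

```python
import numpy as np

def pareto_descent_direction(g_task, g_align):
    """Minimum-norm convex combination of two gradient vectors (two-objective
    MGDA). Stepping along the negative of the returned vector decreases both
    objectives whenever they are not already Pareto stationary; a zero vector
    indicates a Pareto stationary point."""
    diff = g_align - g_task
    denom = diff @ diff
    if denom == 0.0:
        # Gradients coincide; either one is already a common direction.
        return g_task
    # Closed-form minimizer of ||a*g_task + (1-a)*g_align||^2 over a in [0,1].
    alpha = np.clip((diff @ g_align) / denom, 0.0, 1.0)
    return alpha * g_task + (1.0 - alpha) * g_align
```

For orthogonal gradients such as (1, 0) and (0, 1), this returns (0.5, 0.5), which has positive inner product with both gradients; for directly opposed gradients it returns the zero vector, signalling that no single update can improve both objectives, i.e., a Pareto stationary point of the kind a fixed weighted sum cannot detect.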
Effective start/end date: 1/01/23 → …