In this paper, a new approach to self-taught learning is proposed. Classification in a target task with limited labeled target data is improved by exploiting a large amount of unlabeled source data, where the target and source data may be drawn from different distributions. Previous approaches rely on the covariate shift assumption, in which the marginal distributions change across domains while the conditional distributions remain the same. In our approach, we propose a new objective function that simultaneously learns a common space, in which the conditional distributions across domains are the same, and learns robust SVM classifiers for the target task using both source and target data in the new representation. Hence, the proposed objective function also incorporates the hidden labels of the source data. We applied the proposed approach to the Caltech-256 and MSRC+LMO datasets and compared its performance against available competing methods. Our method outperforms the successful existing algorithms.
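The alternating structure the abstract suggests (map both domains into a common space, estimate the hidden source labels with an SVM, retrain, and repeat) can be sketched compactly. The snippet below is a minimal illustration under assumptions of my own, not the paper's actual objective: PCA stands in for the learned common space, and per-class mean re-centering is a crude proxy for matching conditional distributions across domains. All names (`self_taught_svm`, `X_tgt`, `X_src`, `n_iters`) are hypothetical.

```python
# Hedged sketch of a self-taught-learning loop with pseudo-labeled source data.
# PCA and per-class mean alignment are stand-ins, NOT the paper's method.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def self_taught_svm(X_tgt, y_tgt, X_src, n_components=50, n_iters=5):
    """Alternate between (a) projecting both domains into a shared space and
    (b) training an SVM that also pseudo-labels the unlabeled source data."""
    # Shared linear subspace fitted on pooled source + target data
    # (assumed stand-in for the learned common representation).
    pca = PCA(n_components=min(n_components, X_tgt.shape[1]))
    pca.fit(np.vstack([X_tgt, X_src]))
    Z_tgt, Z_src = pca.transform(X_tgt), pca.transform(X_src)

    clf = LinearSVC().fit(Z_tgt, y_tgt)        # initial target-only SVM
    for _ in range(n_iters):
        y_src = clf.predict(Z_src)             # estimate the hidden source labels
        # Re-center each pseudo-labeled source class onto the matching target
        # class mean: a crude proxy for equalizing conditional distributions.
        for c in np.unique(y_tgt):
            src_c, tgt_c = y_src == c, y_tgt == c
            if src_c.any() and tgt_c.any():
                Z_src[src_c] += Z_tgt[tgt_c].mean(0) - Z_src[src_c].mean(0)
        # Retrain on target + pseudo-labeled source in the common space.
        clf = LinearSVC().fit(np.vstack([Z_tgt, Z_src]),
                              np.concatenate([y_tgt, y_src]))
    return clf, pca
```

The loop refreshes the source pseudo-labels after every re-alignment, so the classifier and the common space are fit jointly rather than in a single pass, which is the key property the proposed objective function shares with this sketch.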
Download data:
MSRC+LMO features
MSRC+LMO labels