Transfer learning has been proposed to address the scarcity of labeled data in the target domain by leveraging data from the source domain. In many real-world applications, data is often represented from different perspectives, which correspond to multiple views. For example, a web page can be described by its contents and its associated …
The idea of local learning, i.e., classifying a particular example based on its neighbors, has recently been successfully applied to many semi-supervised and clustering problems. However, the local learning methods developed so far are all devised for single-view problems. In fact, in many real-world applications, examples are represented by multiple sets …
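The core idea of local learning mentioned above, classifying an example from its neighbors alone, can be illustrated with a minimal single-view sketch. This is only the simplest instance of the principle (a k-nearest-neighbor majority vote on toy data); the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def local_predict(X_train, y_train, x, k=5):
    """Classify x by majority vote among its k nearest training
    neighbors -- the simplest form of local learning."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# toy data: two well-separated 2-D clusters around (0,0) and (3,3)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(3.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

print(local_predict(X, y, np.array([2.9, 3.1])))  # → 1
```

Because the decision is made per query point from its local neighborhood, no single global model is ever fit, which is what distinguishes local from global learning.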
In Multiple Instance Learning (MIL), each entity is normally expressed as a set of instances. Most current MIL methods only deal with the case in which each instance is represented by one type of features. However, in many real-world applications, entities are often described from several different information sources/views. For example, when applying …
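The bag-of-instances representation described above can be sketched as follows. The dictionary layout with two views and the standard MIL labeling assumption (a bag is positive iff at least one instance is positive) are illustrative conventions, not the paper's specific multi-view MIL formulation.

```python
import numpy as np

# Hypothetical multi-view MIL bag: the same 4 instances carry one
# feature vector per view, with different dimensionalities per view.
rng = np.random.default_rng(0)
bag = {
    "view_text":  rng.random((4, 10)),  # 4 instances, 10-d text features
    "view_image": rng.random((4, 5)),   # same 4 instances, 5-d image features
}

def bag_label(instance_scores, threshold=0.5):
    """Standard MIL assumption: a bag is positive iff at least one of
    its instances scores above the threshold."""
    return int(np.max(instance_scores) > threshold)

print(bag_label(np.array([0.1, 0.2, 0.7, 0.3])))  # → 1
```

Under this assumption the bag label depends only on the maximum instance score, which is why many MIL methods reduce bag classification to instance scoring followed by a max (or soft-max) pooling step.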
Transfer learning is an important branch of both machine learning and data mining. Its main objective is to transfer knowledge across domains, tasks, and distributions that are similar but not the same. Currently, almost all transfer learning methods are designed for traditional single-instance learning problems. However, in many …
In this paper, we propose a Semi-Supervised Multiple-Instance Learning (SSMIL) algorithm and apply it to Localized Content-Based Image Retrieval (LCBIR), where the goal is to rank all the images in the database according to the object the user wants to retrieve. SSMIL treats LCBIR as a semi-supervised problem and utilizes the unlabeled pictures to help …
This paper proposes blind source extraction methods based on several time-delay autocorrelations of primary sources, called MACBSE. The MACBSE approaches are batch fixed-point learning algorithms for extracting source signals with linear autocorrelations. The fixed-point algorithms are very simple and do not require choosing any learning step sizes. …
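A minimal sketch in the spirit of such fixed-point, step-size-free extraction: power iteration on the symmetrized lag-τ covariance of whitened mixtures pulls out the source with the strongest autocorrelation at that lag. This particular update is an assumption chosen for illustration (a single-lag, AMUSE-style scheme), not the paper's exact MACBSE rule, which combines several time delays.

```python
import numpy as np

def extract_source(X, tau=1, n_iter=50):
    """Fixed-point extraction of the source with the strongest lag-tau
    autocorrelation from whitened mixtures X (channels x samples).
    Power iteration on the symmetrized lagged covariance; note there is
    no learning step size to tune."""
    C = X[:, tau:] @ X[:, :-tau].T / (X.shape[1] - tau)
    C = (C + C.T) / 2                       # symmetrize the lagged covariance
    w = np.random.default_rng(1).normal(size=X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        w = C @ w                           # fixed-point update
        w /= np.linalg.norm(w)              # renormalize the extraction vector
    return w, w @ X

# toy demo: one autocorrelated source (sinusoid) plus white noise,
# linearly mixed, then whitened before extraction
rng = np.random.default_rng(0)
t = np.arange(2000)
S = np.vstack([np.sin(0.1 * t), rng.normal(size=t.size)])
A = np.array([[1.0, 0.5], [0.3, 1.0]])      # mixing matrix
X = A @ S
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = np.diag(d ** -0.5) @ E.T @ X           # whitening transform
w, y = extract_source(Xw)
# the extracted signal should closely match the sinusoidal source (up to sign)
print(abs(np.corrcoef(y, S[0])[0, 1]) > 0.95)
```

The white-noise source has near-zero lagged autocorrelation, so only the sinusoid survives the iteration; this is the property that lets autocorrelation-based methods extract one desired source without estimating the full unmixing matrix.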