Adaptive Multi-view Rule Discovery for Weakly-Supervised Compatible Products Prediction

@inproceedings{Zhang2022AdaptiveMR,
  title={Adaptive Multi-view Rule Discovery for Weakly-Supervised Compatible Products Prediction},
  author={Rongzhi Zhang and Rebecca West and Xiquan Cui and Chao Zhang},
  booktitle={Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  year={2022}
}
On e-commerce platforms, predicting whether two products are compatible with each other is an important capability for delivering a trustworthy product recommendation and search experience to consumers. However, accurately predicting product compatibility is difficult due to heterogeneous product data and the lack of manually curated training data. We study the problem of discovering effective labeling rules that can enable weakly-supervised product compatibility prediction. We develop AMRule, a…
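For readers unfamiliar with labeling rules in weak supervision, a minimal sketch of what compatibility-labeling rules and their aggregation might look like. The attribute names, rule predicates, and products below are hypothetical illustrations, not taken from the paper:

```python
def rule_same_brand_series(a: dict, b: dict) -> int:
    """Vote +1 (compatible) when products share a known brand and series; abstain with 0."""
    if a.get("brand") and a.get("brand") == b.get("brand") and a.get("series") == b.get("series"):
        return 1
    return 0

def rule_connector_mismatch(a: dict, b: dict) -> int:
    """Vote -1 (incompatible) when both connector types are known and differ; abstain with 0."""
    ca, cb = a.get("connector"), b.get("connector")
    if ca and cb and ca != cb:
        return -1
    return 0

def weak_label(a: dict, b: dict, rules) -> int:
    """Aggregate rule votes by sign: +1 compatible, -1 incompatible, 0 no signal."""
    total = sum(rule(a, b) for rule in rules)
    return (total > 0) - (total < 0)

rules = [rule_same_brand_series, rule_connector_mismatch]
phone = {"brand": "Acme", "series": "X", "connector": "usb-c"}
case = {"brand": "Acme", "series": "X", "connector": "usb-c"}
charger = {"brand": "Acme", "series": "Y", "connector": "micro-usb"}
print(weak_label(phone, case, rules))     # 1 (weakly labeled compatible)
print(weak_label(phone, charger, rules))  # -1 (weakly labeled incompatible)
```

Hand-written rules like these supply noisy training labels in place of manual annotation; the paper's contribution is discovering such rules automatically from multi-view product data.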


References

Showing 1-10 of 44 references

NERO: A Neural Rule Grounding Framework for Label-Efficient Relation Extraction

NERO, a neural approach to grounding rules for relation extraction (RE), jointly learns a relation extraction module and a soft matching module; the matcher learns to match rules with semantically similar sentences so that raw corpora can be automatically labeled and leveraged by the RE module (with much better coverage) as augmented supervision.
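A rough sketch of the rule-grounding idea: a rule's surface pattern is softly matched against unlabeled sentences, and sufficiently similar sentences inherit the rule's label. The bag-of-words cosine below is a crude stand-in for NERO's learned soft matcher, and the rule patterns and threshold are illustrative assumptions, not the paper's:

```python
import math
from collections import Counter

def bow_cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity -- a crude stand-in for a learned soft matcher."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def soft_match_label(sentence: str, rules, threshold: float = 0.5):
    """Assign the most similar rule's label if similarity clears the threshold, else None."""
    best = max(rules, key=lambda r: bow_cosine(sentence, r["pattern"]))
    sim = bow_cosine(sentence, best["pattern"])
    return (best["label"] if sim >= threshold else None, sim)

rules = [
    {"pattern": "x was born in y", "label": "birthplace"},
    {"pattern": "x works for y", "label": "employer"},
]
label, sim = soft_match_label("she was born in paris", rules)
print(label, round(sim, 2))  # birthplace 0.6
```

Sentences labeled this way augment the small set of exactly rule-matched sentences, which is the coverage gain the summary refers to.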

Weakly Supervised Co-Training of Query Rewriting and Semantic Matching for e-Commerce

This study investigates the instinctive connection between query rewriting and semantic matching tasks, and proposes a co-training framework to address the data sparseness problem when training deep neural networks.

STEAM: Self-Supervised Taxonomy Expansion with Mini-Paths

A self-supervised taxonomy expansion model named STEAM is proposed, which leverages natural supervision in the existing taxonomy for expansion, and outperforms state-of-the-art methods for taxonomy expansion by 11.6% in accuracy and 7.0% in mean reciprocal rank on three public benchmarks.

Product1M: Towards Weakly Supervised Instance-Level Product Retrieval via Cross-Modal Pretraining

A novel model named Cross-modal contrAstive Product Transformer for instance-level prodUct REtrieval (CAPTURE) is proposed, which excels at capturing the potential synergy between multi-modal inputs via a hybrid-stream transformer in a self-supervised manner.

Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling

This work develops the first framework for interactive weak supervision, in which a method proposes heuristics and learns from user feedback on each proposed heuristic. It demonstrates that only a small number of feedback iterations are needed to train models that achieve highly competitive test-set performance without access to ground-truth training labels.

Adaptive Rule Discovery for Labeling Text Data

DARWIN, an interactive system designed to ease the task of writing rules for labeling text data in weakly-supervised settings, is presented; rules discovered by DARWIN on average identify 40% more positive instances than Snuba, even when Snuba is provided with 1000 labeled instances.

Learning from Rules Generalizing Labeled Exemplars

A training algorithm jointly denoises rules via latent coverage variables and trains the model through a soft implication loss over the coverage and label variables; it is shown to be more accurate than several existing methods for learning from a mix of clean and noisy supervision.
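The implication "rule fires on x ⟹ model predicts the rule's label" can be relaxed into a differentiable penalty, downweighted by a learned per-rule coverage/reliability term. The sketch below illustrates that idea only; it is not the paper's exact objective, and the cross-entropy form and scalar coverage are simplifying assumptions:

```python
import math

def soft_implication_loss(p_pos, rule_fires, rule_label, coverage):
    """
    Relaxed penalty for: 'rule fires on x  =>  model predicts the rule's label'.
    p_pos:      model probabilities of the positive class, one per instance
    rule_fires: 1.0 where the rule's pattern matches the instance, else 0.0
    rule_label: +1 or -1, the label the rule assigns when it fires
    coverage:   scalar in [0, 1], a learned reliability that denoises the rule
    """
    p_rule = [p if rule_label == 1 else 1.0 - p for p in p_pos]
    # Cross-entropy toward the rule's label, applied only where the rule fires
    # and downweighted by the rule's learned coverage.
    terms = [f * coverage * -math.log(p + 1e-12) for f, p in zip(rule_fires, p_rule)]
    return sum(terms) / len(terms)

p = [0.9, 0.8, 0.3]
fires = [1.0, 1.0, 0.0]  # rule matches the first two instances only
loss = soft_implication_loss(p, fires, rule_label=1, coverage=0.5)
```

When the model already agrees with the rule where it fires, the penalty is small; driving coverage toward 0 during training effectively mutes an unreliable rule.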

Direct mining of discriminative and essential frequent patterns via model-based search tree

This paper proposes a new method for frequent pattern mining that builds a decision tree partitioning the data across different nodes, and directly discovers a discriminative pattern at each node to further divide its examples into purer subsets.
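One node of such a model-based search tree can be sketched as: score candidate patterns by how well their presence separates the classes, pick the best, and partition the examples by whether they contain it. The absolute-difference score below is a simple stand-in for the paper's information-gain-style criterion, and the toy itemsets are illustrative:

```python
def discriminative_score(pattern, examples):
    """How well a pattern's presence separates the classes: |match rate on
    positives - match rate on negatives| (a stand-in for information gain)."""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]

    def rate(group):
        return sum(pattern <= x for x in group) / max(len(group), 1)

    return abs(rate(pos) - rate(neg))

def split_node(examples, patterns):
    """One node of the search tree: pick the most discriminative pattern and
    partition the node's examples by whether they contain it."""
    best = max(patterns, key=lambda p: discriminative_score(p, examples))
    matched = [(x, y) for x, y in examples if best <= x]
    unmatched = [(x, y) for x, y in examples if not (best <= x)]
    return best, matched, unmatched

# Toy transactions (itemsets) with binary labels.
examples = [({"a", "b"}, 1), ({"a"}, 1), ({"b", "c"}, 0), ({"c"}, 0)]
patterns = [frozenset({"a"}), frozenset({"b"})]
best, matched, unmatched = split_node(examples, patterns)
```

Recursing `split_node` on `matched` and `unmatched` until the subsets are pure yields the tree; the patterns chosen along the way are the directly mined discriminative patterns.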

Nemo: Guiding and Contextualizing Weak Supervision for Interactive Data Programming

Nemo is presented, an end-to-end interactive system that improves the overall productivity of the WS learning pipeline by an average of 20% (and up to 47% in one task) compared to the prevailing WS approach.

Understanding Programmatic Weak Supervision via Source-aware Influence Function

This work builds on the influence function (IF) and proposes a source-aware IF, which leverages the generation process of the probabilistic labels to decompose the end model's training objective and then calculates the influence associated with each (data, source, class) tuple.