Exploring Generative Models for Joint Attribute Value Extraction from Product Titles

@article{Roy2022ExploringGM,
  title={Exploring Generative Models for Joint Attribute Value Extraction from Product Titles},
  author={Kalyani Roy and Tapas Nayak and Pawan Goyal},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.07130}
}
Attribute values of the products are an essential component in any e-commerce platform. Attribute Value Extraction (AVE) deals with extracting the attributes of a product and their values from its title or description. In this paper, we propose to tackle the AVE task using generative frameworks. We present two types of generative paradigms, namely, word sequence-based and positional sequence-based, by formulating the AVE task as a generation problem. We conduct experiments on two datasets…
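As a rough illustration of the two generative paradigms named in the abstract, the targets can be linearized either as plain attribute-value text or as attribute names paired with value positions in the title. The templates and separator markers below are assumptions for illustration only, not the paper's actual formats.

```python
# Sketch of two possible generation targets for AVE.
# The <sep>/<tup> markers and exact templates are illustrative assumptions.

def word_sequence_target(pairs):
    """Word sequence paradigm: emit attribute-value pairs as plain text."""
    return " <tup> ".join(f"{attr} <sep> {val}" for attr, val in pairs)

def positional_sequence_target(title_tokens, pairs):
    """Positional paradigm: emit each attribute with the start/end token
    positions of its value inside the product title."""
    chunks = []
    for attr, val in pairs:
        val_tokens = val.split()
        # locate the value span in the title (first match, for illustration)
        for i in range(len(title_tokens) - len(val_tokens) + 1):
            if title_tokens[i:i + len(val_tokens)] == val_tokens:
                chunks.append(f"{attr} <sep> {i} {i + len(val_tokens) - 1}")
                break
    return " <tup> ".join(chunks)

title = "Levi's Men's Blue Slim Fit Cotton Jeans"
pairs = [("Color", "Blue"), ("Material", "Cotton")]
print(word_sequence_target(pairs))                      # Color <sep> Blue <tup> Material <sep> Cotton
print(positional_sequence_target(title.split(), pairs)) # Color <sep> 2 2 <tup> Material <sep> 5 5
```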

References

Multimodal Joint Attribute Prediction and Value Extraction for E-commerce Product

This paper proposes a multimodal method to jointly predict product attributes and extract values from textual product descriptions with the help of the product images, and demonstrates that explicitly modeling the relationship between attributes and values helps the method establish the correspondence between them.
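A minimal PyTorch sketch of the multimodal fusion idea: image features gate the text representation before attribute prediction and value tagging. The dimensions, gated fusion, and the two heads are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultimodalAVE(nn.Module):
    def __init__(self, hidden=256, num_attrs=50, num_tags=3):
        super().__init__()
        self.img_proj = nn.Linear(2048, hidden)         # project CNN image features
        self.gate = nn.Linear(2 * hidden, hidden)       # text-image fusion gate
        self.attr_head = nn.Linear(hidden, num_attrs)   # attribute prediction
        self.tag_head = nn.Linear(hidden, num_tags)     # BIO value tagging

    def forward(self, text_hidden, img_feat):
        # text_hidden: (batch, seq_len, hidden) from a text encoder
        # img_feat:    (batch, 2048) from an image encoder
        img = self.img_proj(img_feat).unsqueeze(1).expand_as(text_hidden)
        g = torch.sigmoid(self.gate(torch.cat([text_hidden, img], dim=-1)))
        fused = g * text_hidden + (1 - g) * img
        return self.attr_head(fused.mean(dim=1)), self.tag_head(fused)

model = MultimodalAVE()
attr_logits, tag_logits = model(torch.randn(2, 16, 256), torch.randn(2, 2048))
```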

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

BART is presented, a denoising autoencoder for pretraining sequence-to-sequence models, which matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks.
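A hedged sketch of using BART as the sequence-to-sequence backbone for generation-style extraction via the Hugging Face transformers API. The checkpoint name and the linearized target string are illustrative choices; the plain-text markers would normally be registered as special tokens.

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("Levi's Men's Blue Slim Fit Cotton Jeans", return_tensors="pt")
labels = tokenizer("Color <sep> Blue <tup> Material <sep> Cotton",
                   return_tensors="pt").input_ids

# Seq2seq training step: cross-entropy against the linearized target.
loss = model(**inputs, labels=labels).loss
loss.backward()

# Inference: generate the linearized attribute-value sequence.
generated = model.generate(**inputs, max_length=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```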

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.
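T5 casts every task as text-to-text with a task prefix, so extraction can be phrased as conditional generation from a prompted input. The prefix string and the t5-small checkpoint below are illustrative assumptions.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Task prefix plus product title; the model would be fine-tuned to emit
# a linearized attribute-value sequence.
prompt = "extract attributes: Levi's Men's Blue Slim Fit Cotton Jeans"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```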

Scaling up Open Tagging from Tens to Thousands: Comprehension Empowered Attribute Value Extraction from Product Title

A novel approach that supports value extraction scaling up to thousands of attributes without losing performance; it explicitly models the semantic representations of the attribute and the title, and develops an attention mechanism to capture the interactive semantic relations between them, making the framework attribute-comprehensive.
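A rough PyTorch sketch of the attribute-aware attention idea: title tokens attend to the attribute representation before tagging, so one tagger can serve thousands of attributes. The single-head attention and dimensions are simplifying assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class AttributeAwareTagger(nn.Module):
    def __init__(self, hidden=256, num_tags=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
        self.tagger = nn.Linear(2 * hidden, num_tags)  # BIO tags for the value span

    def forward(self, title_hidden, attr_hidden):
        # title_hidden: (batch, title_len, hidden), attr_hidden: (batch, attr_len, hidden)
        # each title token attends to the attribute representation
        ctx, _ = self.attn(title_hidden, attr_hidden, attr_hidden)
        return self.tagger(torch.cat([title_hidden, ctx], dim=-1))

tagger = AttributeAwareTagger()
logits = tagger(torch.randn(2, 16, 256), torch.randn(2, 4, 256))  # (2, 16, 3)
```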

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
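Extractive AVE is commonly implemented as BIO tagging on top of a BERT encoder. The sketch below uses the generic transformers token-classification head as an illustration, not the exact setup of any paper listed here.

```python
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# num_labels=3 for a simple B/I/O tagging scheme (assumption for illustration)
model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=3)

inputs = tokenizer("Levi's Men's Blue Slim Fit Cotton Jeans", return_tensors="pt")
logits = model(**inputs).logits        # (1, seq_len, 3) tag scores per wordpiece
tags = logits.argmax(dim=-1)           # predicted tag ids before any fine-tuning
print(tags)
```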

Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction

A representation scheme for relation tuples that enables the decoder to generate one word at a time, like machine translation models, while still finding all the tuples present in a sentence, including full entity names of different lengths and overlapping entities.
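A toy linearization in the spirit of such a representation scheme: each tuple is written out word by word with separator tokens so a standard decoder can emit full entity names and overlapping entities. The particular separators are assumptions.

```python
def linearize_tuples(tuples, ent_sep=" ; ", tup_sep=" | "):
    """Join (head, tail, relation) triples into one decoder target string."""
    return tup_sep.join(ent_sep.join([head, tail, rel]) for head, tail, rel in tuples)

tuples = [("Barack Obama", "Hawaii", "born_in"),
          ("Barack Obama", "United States", "president_of")]
print(linearize_tuples(tuples))
# Barack Obama ; Hawaii ; born_in | Barack Obama ; United States ; president_of
```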

Attribute Value Generation from Product Title using Language Models

This paper uses the large-scale pretraining of GPT-2 and the T5 text-to-text transformer to create fine-tuned models that can effectively perform this task and achieve state-of-the-art performance for different attribute classes, a task that previously required a diverse set of models.
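A hedged sketch of casting value generation as causal language modeling with GPT-2. The prompt template is an illustrative assumption, and an off-the-shelf checkpoint would need fine-tuning before it produces useful attribute values.

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical prompt format: the model is expected to continue with the value.
prompt = "title: Levi's Men's Blue Slim Fit Cotton Jeans\nattribute: Color\nvalue:"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=5,
                            pad_token_id=tokenizer.eos_token_id)
# decode only the newly generated continuation
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:]))
```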

AdaTag: Multi-Attribute Value Extraction from Product Profiles with Adaptive Decoding

AdaTag is presented, which uses adaptive decoding to handle extraction of product attribute values through a hypernetwork and a Mixture-of-Experts (MoE) module, and allows for separate, but semantically correlated, decoders to be generated on the fly for different attributes.
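A toy hypernetwork in the spirit of adaptive decoding: an attribute embedding is mapped to the parameters of a small tagging decoder generated on the fly for that attribute. The sizes and the purely linear decoder are simplifying assumptions, not AdaTag's actual hypernetwork-plus-MoE design.

```python
import torch
import torch.nn as nn

class HyperDecoder(nn.Module):
    def __init__(self, hidden=256, num_tags=3):
        super().__init__()
        self.hidden, self.num_tags = hidden, num_tags
        # hypernetwork: attribute embedding -> flattened decoder weights + bias
        self.hyper = nn.Linear(hidden, hidden * num_tags + num_tags)

    def forward(self, token_hidden, attr_emb):
        # token_hidden: (batch, seq_len, hidden), attr_emb: (batch, hidden)
        params = self.hyper(attr_emb)
        W = params[:, : self.hidden * self.num_tags].view(-1, self.hidden, self.num_tags)
        b = params[:, self.hidden * self.num_tags:]
        # per-example decoder generated on the fly for this attribute
        return torch.bmm(token_hidden, W) + b.unsqueeze(1)

dec = HyperDecoder()
logits = dec(torch.randn(2, 16, 256), torch.randn(2, 256))  # (2, 16, 3)
```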

Learning to Extract Attribute Value from Product via Question Answering: A Multi-task Approach

This work proposes a novel approach for Attribute Value Extraction via Question Answering (AVEQA) using a multi-task framework which treats each attribute as a question and identifies the answer span corresponding to the attribute value in the product context.
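The QA framing treats each attribute as a question over the product text. A generic extractive-QA pipeline illustrates the idea; the checkpoint below is a stand-in for demonstration, not the AVEQA model itself.

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")
result = qa(question="What is the color?",
            context="Levi's Men's Blue Slim Fit Cotton Jeans")
print(result["answer"], result["score"])  # extracted value span and confidence
```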

TXtract: Taxonomy-Aware Knowledge Extraction for Thousands of Product Categories

This paper proposes TXtract, a taxonomy-aware knowledge extraction model that applies to thousands of product categories organized in a hierarchical taxonomy; it is both scalable, as it trains a single model for thousands of categories, and effective, as it extracts category-specific attribute values.
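A toy illustration of category conditioning: a learned category embedding is concatenated to every token representation before tagging, so a single model can produce category-specific extractions. The real taxonomy-aware conditioning is richer (hierarchical category paths); this is only a simplification.

```python
import torch
import torch.nn as nn

class CategoryConditionedTagger(nn.Module):
    def __init__(self, num_categories=4000, hidden=256, cat_dim=64, num_tags=3):
        super().__init__()
        self.cat_emb = nn.Embedding(num_categories, cat_dim)   # one vector per category
        self.tagger = nn.Linear(hidden + cat_dim, num_tags)    # BIO tagging head

    def forward(self, token_hidden, category_id):
        # token_hidden: (batch, seq_len, hidden), category_id: (batch,)
        cat = self.cat_emb(category_id).unsqueeze(1).expand(-1, token_hidden.size(1), -1)
        return self.tagger(torch.cat([token_hidden, cat], dim=-1))

tagger = CategoryConditionedTagger()
logits = tagger(torch.randn(2, 16, 256), torch.tensor([3, 1570]))  # (2, 16, 3)
```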