Transition-based Adversarial Network for Cross-lingual Aspect Extraction

@inproceedings{Wang2018TransitionbasedAN,
  title={Transition-based Adversarial Network for Cross-lingual Aspect Extraction},
  author={Wenya Wang and Sinno Jialin Pan},
  booktitle={IJCAI},
  year={2018}
}
In fine-grained opinion mining, the task of aspect extraction involves identifying explicit product features in customer reviews. This task has been widely studied in major languages, e.g., English, but has seldom been addressed in low-resource languages due to the lack of annotated corpora. To address this, we develop a novel deep model that transfers knowledge from a source language with labeled training data to a target language without any annotations. Different from cross-lingual…
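
Since the abstract is truncated, here is only a minimal sketch of the adversarial-transfer idea it describes: a shared encoder feeds both an aspect tagger trained on source-language labels and a language discriminator attached through a gradient-reversal layer, so that backpropagation pushes the encoder toward language-invariant features. All names, dimensions, and the 3-tag BIO scheme below are illustrative assumptions, and the paper's transition-based component is omitted entirely.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips (and scales) gradients on the way back.
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class AdversarialTagger(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128, n_tags=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)       # ideally bilingual embeddings
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.tagger = nn.Linear(2 * hidden, n_tags)        # BIO aspect tags
        self.lang_disc = nn.Sequential(                    # source-vs-target classifier
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, tokens, lamb=1.0):
        h, _ = self.encoder(self.emb(tokens))              # (B, T, 2*hidden)
        tag_logits = self.tagger(h)                        # supervised on source only
        pooled = h.mean(dim=1)                             # sentence-level features
        lang_logits = self.lang_disc(GradReverse.apply(pooled, lamb))
        return tag_logits, lang_logits

In training, the tagging loss would be computed on labeled source batches only, while the language-discrimination loss is computed on unlabeled batches from both languages; the reversed gradient makes the encoder work against the discriminator.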

Citations

Cross-lingual Multi-Level Adversarial Transfer to Enhance Low-Resource Name Tagging

TLDR
A new neural architecture is developed that leverages multi-level adversarial transfer to project source-language words into the same semantic space as target-language words, without using any parallel corpora or bilingual gazetteers, and yields language-agnostic sequential features.

CL-XABSA: Contrastive Learning for Cross-lingual Aspect-based Sentiment Analysis

TLDR
A novel framework, CL-XABSA: Contrastive Learning for Cross-lingual Aspect-Based Sentiment Analysis, is designed to regularize the semantic spaces of the source and target languages to be more uniform, and experiments demonstrate that the proposed method yields improvements on the three tasks of XABSA, distillation XABSA, and MABSA.
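
As a loose illustration of the contrastive idea (not CL-XABSA's exact objective), a token-level InfoNCE loss can pull a source-language token embedding toward a target-language token with the same sentiment label and push it away from others. The pairing scheme and names below are assumptions.

import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.1):
    # anchor, positive: (D,); negatives: (N, D); returns a scalar loss
    a = F.normalize(anchor, dim=0)
    pos = torch.exp(torch.dot(a, F.normalize(positive, dim=0)) / tau)
    neg = torch.exp(F.normalize(negatives, dim=1) @ a / tau).sum()
    return -torch.log(pos / (pos + neg))

Summing such a loss over sampled cross-lingual token pairs would regularize the two semantic spaces toward each other, which is the effect the paper describes.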

Cross-lingual Aspect-based Sentiment Analysis with Aspect Term Code-Switching

TLDR
This paper considers unsupervised cross-lingual transfer for the ABSA task, where labeled data is available only in the source language and the goal is to transfer its knowledge to a target language with no annotations, and proposes an alignment-free label projection method.

Cross-Lingual Dependency Parsing with Unlabeled Auxiliary Languages

TLDR
This work explores adversarial training for learning contextual encoders that produce invariant representations across languages to facilitate cross-lingual transfer, and proposes to leverage unannotated sentences from auxiliary languages to help learn language-agnostic representations.

Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference

TLDR
An adversarial training framework is proposed to enhance both pre-trained models and classical neural models for cross-lingual natural language inference; experiments demonstrate that three popular neural models enhanced by the framework significantly outperform the original models.

Multi-task Learning of Negation and Speculation for Targeted Sentiment Classification

TLDR
A multi-task learning method is proposed that incorporates information from syntactic and semantic auxiliary tasks, including negation and speculation scope detection, to create English-language models that are more robust to these phenomena.

A Span-based Joint Model for Opinion Target Extraction and Target Sentiment Classification

TLDR
This model first enumerates spans of one or more tokens and learns their representations from the tokens inside; a span-aware attention mechanism is then designed to compute the sentiment information for each span.
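
A rough sketch of the span mechanism described above, assuming token representations H of shape (T, D); the class name, maximum span width, and pooling details are invented for illustration.

import torch
import torch.nn as nn

class SpanPooler(nn.Module):
    def __init__(self, dim, max_width=4):
        super().__init__()
        self.score = nn.Linear(dim, 1)                    # per-token attention score
        self.max_width = max_width

    def forward(self, H):                                 # H: (T, D)
        spans, reps = [], []
        for i in range(H.size(0)):
            for j in range(i, min(H.size(0), i + self.max_width)):
                seg = H[i:j + 1]                          # tokens inside the span
                attn = torch.softmax(self.score(seg), dim=0)
                reps.append((attn * seg).sum(dim=0))      # span-aware attention pooling
                spans.append((i, j))
        return spans, torch.stack(reps)

spans, reps = SpanPooler(dim=64)(torch.randn(6, 64))      # 6 tokens -> enumerated spans

A sentiment classifier over each pooled span representation would then complete such a joint model.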

Aspect Extraction with Sememe Attentions (AESA)

TLDR
An unsupervised neural framework is presented that leverages sememes to enhance lexical semantics, reconstructs sentence representations, and learns aspects via latent variables, analogous to an autoencoder.

A Survey on Aspect-Based Sentiment Analysis: Tasks, Methods, and Challenges

TLDR
A new taxonomy for ABSA is provided which organizes existing studies along the axes of the sentiment elements concerned, with an emphasis on recent advances in compound ABSA tasks.

References

Showing 1–10 of 31 references

Adversarial Deep Averaging Networks for Cross-Lingual Sentiment Classification

TLDR
An Adversarial Deep Averaging Network (ADAN) is proposed to transfer the knowledge learned from labeled data in a resource-rich source language to low-resource languages where only unlabeled data exist.

A Subspace Learning Framework for Cross-Lingual Sentiment Classification with Partial Parallel Data

TLDR
A novel subspace learning framework is proposed that leverages partial parallel data for cross-lingual sentiment classification, jointly learning from document-aligned review data and unaligned data in the source and target languages via non-negative matrix factorization.
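
For readers unfamiliar with the building block, the snippet below shows plain non-negative matrix factorization with scikit-learn; it illustrates the factorization itself, not the paper's joint objective over aligned and unaligned documents, and all sizes are made up.

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X_src = rng.random((20, 100))              # 20 source-language docs x 100 terms
nmf = NMF(n_components=8, init="nndsvda", max_iter=300)
W = nmf.fit_transform(X_src)               # document factors: the learned "subspace"
H = nmf.components_                        # term factors
print(W.shape, H.shape)                    # (20, 8) (8, 100)

A joint framework of this kind would tie the document factors of aligned source/target reviews together while factorizing the unaligned blocks alongside them.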

Cross-Lingual Mixture Model for Sentiment Classification

TLDR
This paper proposes a generative cross-lingual mixture model (CLMM) that leverages unlabeled bilingual parallel data, learns previously unseen sentiment words from the large parallel corpus, and significantly improves vocabulary coverage.

Unsupervised Word and Dependency Path Embeddings for Aspect Term Extraction

TLDR
A novel approach to aspect term extraction is presented, based on unsupervised learning of distributed representations of words and dependency paths, where a multi-hop dependency path is treated as a sequence of grammatical relations and modeled by a recurrent neural network.
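
A loose sketch of that path-modeling idea: embed each grammatical relation on a multi-hop dependency path and run the sequence through an RNN. The toy relation vocabulary and all dimensions are assumptions.

import torch
import torch.nn as nn

REL2ID = {"nsubj": 0, "dobj": 1, "amod": 2, "prep": 3}    # toy relation vocabulary

class DepPathEncoder(nn.Module):
    def __init__(self, n_rels=len(REL2ID), emb_dim=50, hidden=64):
        super().__init__()
        self.rel_emb = nn.Embedding(n_rels, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)

    def forward(self, rel_ids):                           # rel_ids: (B, path_len)
        _, h = self.rnn(self.rel_emb(rel_ids))
        return h[-1]                                      # (B, hidden) path embedding

path = torch.tensor([[REL2ID["amod"], REL2ID["nsubj"]]])  # a 2-hop path
print(DepPathEncoder()(path).shape)                       # torch.Size([1, 64])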

Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings

TLDR
This work proposes a general class of discriminative models based on recurrent neural networks and word embeddings that can be successfully applied to fine-grained opinion mining tasks without any task-specific feature engineering effort.

Cross-Language Text Classification Using Structural Correspondence Learning

We present a new approach to cross-language text classification that builds on structural correspondence learning, a recently proposed theory for domain adaptation. The approach uses unlabeled…

Cross-Lingual Sentiment Classification with Bilingual Document Representation Learning

TLDR
This study proposes a representation learning approach that simultaneously learns vector representations for texts in both the source and target languages, and shows that BiDRL outperforms the state-of-the-art methods for all target languages.

Extracting Opinion Targets in a Single and Cross-Domain Setting with Conditional Random Fields

TLDR
This paper models the problem as an information extraction task, which is addressed with Conditional Random Fields (CRF), and employs the supervised algorithm by Zhuang et al. (2006), which represents the state of the art on the employed data.
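
A minimal BIO-tagging CRF in that spirit, using the sklearn-crfsuite package (pip install sklearn-crfsuite); the feature template is deliberately tiny and the tag names are illustrative, not the paper's exact setup.

import sklearn_crfsuite

def word2features(sent, i):
    w = sent[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

train_sents = [["The", "battery", "life", "is", "great"]]
train_tags = [["O", "B-ASP", "I-ASP", "O", "O"]]          # "battery life" is the target

X = [[word2features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, train_tags)
print(crf.predict(X))                                     # [['O', 'B-ASP', 'I-ASP', 'O', 'O']]

In such a setup, cross-domain performance hinges largely on the feature template rather than on the CRF machinery itself.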

Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis

TLDR
A novel joint model is proposed that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion term co-extraction; the model can flexibly incorporate hand-crafted features to further boost its information extraction performance.

An Autoencoder Approach to Learning Bilingual Word Representations

TLDR
This work explores the use of autoencoder-based methods for cross-language learning of vectorial word representations that are coherent between two languages, while not relying on word-level alignments, and achieves state-of-the-art performance.
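
As a loose illustration of the autoencoder idea (the actual model details differ), one can encode a bag-of-words in one language and reconstruct bags-of-words in both languages from a shared code, so the code must be language-coherent; sizes and names below are assumptions.

import torch
import torch.nn as nn

class BilingualAE(nn.Module):
    def __init__(self, vocab_src=5000, vocab_tgt=5000, code=128):
        super().__init__()
        self.enc_src = nn.Linear(vocab_src, code)
        self.dec_src = nn.Linear(code, vocab_src)
        self.dec_tgt = nn.Linear(code, vocab_tgt)

    def forward(self, bow_src):                           # bow_src: (B, vocab_src)
        z = torch.sigmoid(self.enc_src(bow_src))          # shared bilingual code
        return self.dec_src(z), self.dec_tgt(z)           # reconstruct both sides

Training on sentence-aligned pairs, with a reconstruction loss on each decoder, would yield word representations coherent across the two languages without word-level alignments.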