Cross-lingual Spoken Language Understanding with Regularized Representation Alignment

@inproceedings{Liu2020CrosslingualSL,
  title={Cross-lingual Spoken Language Understanding with Regularized Representation Alignment},
  author={Zihan Liu and Genta Indra Winata and Peng Xu and Zhaojiang Lin and Pascale Fung},
  booktitle={EMNLP},
  year={2020}
}
Despite the promising results of current cross-lingual models for spoken language understanding systems, they still suffer from imperfect cross-lingual representation alignments between the source and target languages, which makes the performance sub-optimal. To cope with this issue, we propose a regularization approach to further align word-level and sentence-level representations across languages without any external resource. First, we regularize the representation of user utterances based… 
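Since the abstract is cut off above, the following is a minimal, purely illustrative sketch of the general idea of regularizing cross-lingual representation alignment: an auxiliary penalty that pulls word-level and sentence-level encodings of corresponding source- and target-language utterances together. It is not the paper's exact formulation; the function name `alignment_regularizer`, the tensor shapes, and the assumption that loosely paired source/target encodings are available (e.g., from code-switched copies of the same utterance) are all assumptions made for illustration.

```python
# Illustrative sketch only (PyTorch): a simple distance-based regularizer that
# encourages word-level and sentence-level representations of corresponding
# source- and target-language utterances to align. This is NOT the paper's
# exact method; its regularization terms are defined in the full text.
import torch
import torch.nn.functional as F

def alignment_regularizer(src_words, tgt_words, src_sent, tgt_sent,
                          word_weight=1.0, sent_weight=1.0):
    """Hypothetical inputs:
       src_words, tgt_words: (batch, seq_len, hidden) word-level encodings
       src_sent,  tgt_sent:  (batch, hidden) sentence-level encodings
    """
    # Word-level alignment: mean squared distance between positionally paired encodings.
    word_loss = F.mse_loss(src_words, tgt_words)
    # Sentence-level alignment: cosine distance between utterance encodings.
    sent_loss = (1.0 - F.cosine_similarity(src_sent, tgt_sent, dim=-1)).mean()
    return word_weight * word_loss + sent_weight * sent_loss

# Typical usage: total_loss = task_loss + alignment_regularizer(...)
```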

Citations

Multi-level Contrastive Learning for Cross-lingual Spoken Language Understanding

TLDR
This paper proposes to exploit the “utterance-slot-word” structure of SLU and systematically model this structure with a multi-level contrastive learning framework at the utterance, slot, and word levels, and develops a label-aware joint model that leverages label semantics for cross-lingual knowledge transfer.

On the Importance of Word Order Information in Cross-lingual Sequence Labeling

TLDR
This paper hypothesizes that reducing the amount of word order information fitted by the models can improve adaptation performance in target languages, and introduces several methods to make models encode less word order information of the source language.

Call Larisa Ivanovna: Code-Switching Fools Multilingual NLU Models

TLDR
It is reported that state-of-the-art NLU models are unable to handle code-switching, and it is shown that the closer the languages are, the better the NLU model handles their alternation.

Learning from Multiple Noisy Augmented Data Sets for Better Cross-Lingual Spoken Language Understanding

TLDR
This paper develops a denoising training approach that outperforms the existing state of the art by 3.05 and 4.24 percentage points on two benchmark datasets, respectively.

Cross-lingual Transfer for Text Classification with Dictionary-based Heterogeneous Graph

TLDR
A dictionary-based heterogeneous graph neural network (DHGNet) is proposed that effectively handles the heterogeneity of the dictionary-based heterogeneous graph (DHG) through two-step aggregation, consisting of word-level and language-level aggregations.

XCode: Towards Cross-Language Code Representation with Large-Scale Pre-Training

TLDR
A novel cross-language code representation with large-scale pre-training (XCode) method that uses several abstract syntax trees and ELMo-enhanced variational autoencoders to obtain multiple pre-trained source-code language models trained on about 1.5 million code snippets, together with a shared encoder-decoder architecture that transfers knowledge via a multi-teacher single-student method.

Did You Enjoy the Last Supper? An Experimental Study on Cross-Domain NER Models for the Art Domain

TLDR
This paper studies a selection of cross-domain NER models and evaluates them for use in the art domain, particularly for recognizing artwork titles in digitized art-historic documents.

CrossNER: Evaluating Cross-Domain Named Entity Recognition

TLDR
Results show that focusing on the fractional corpus containing domain-specialized entities and utilizing a more challenging pre-training strategy in domain-adaptive pre-training are beneficial for NER domain adaptation, and the proposed method consistently outperforms existing cross-domain NER baselines.

Robust Cross-lingual Task-oriented Dialogue

TLDR
This research presents a novel and scalable approach to solving the challenge of directly simulating human-computer interaction in real time.

ECO-DST: An Efficient Cross-lingual Dialogue State Tracking Framework

TLDR
A novel data-efficient cross-lingual DST framework (ECO-DST), which consists of a cross-lingual encoder and a language-independent decoder, and achieves state-of-the-art results on the CrossWOZ dataset and promising results on the MultiWOZ 2.1 dataset.

References

SHOWING 1-10 OF 33 REFERENCES

Zero-shot Cross-lingual Dialogue Systems with Transferable Latent Variables

TLDR
A zero-shot adaptation of task-oriented dialogue systems to low-resource languages is proposed to cope with the variance of similar sentences across different languages, which is induced by imperfect cross-lingual alignments and inherent differences among languages.

(Almost) Zero-Shot Cross-Lingual Spoken Language Understanding

TLDR
Different approaches to train an SLU component with little supervision for two new languages, Hindi and Turkish, are examined, and it is shown that with only a few hundred labeled examples the authors can surpass the approaches proposed in the literature.

Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog

TLDR
This paper presents a new data set of 57k annotated utterances in English, Spanish, and Thai, and uses this data set to evaluate three different cross-lingual transfer methods, finding that given several hundred training examples in the target language, the latter two methods outperform translating the training data.

CoSDA-ML: Multi-Lingual Code-Switching Data Augmentation for Zero-Shot Cross-Lingual NLP

TLDR
A data augmentation framework to generate multi-lingual code-switching data to fine-tune mBERT, which encourages the model to align representations from the source and multiple target languages at once by mixing their context information.
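As a rough illustration of the dictionary-based code-switching augmentation described in the entry above, the sketch below randomly replaces source-language words with translations drawn from several target languages. The toy dictionary, the replacement probability, and the function name `code_switch` are assumptions made for illustration, not the cited paper's exact procedure.

```python
# Minimal sketch of dictionary-based code-switching data augmentation.
# The dictionary contents and hyperparameters are illustrative only.
import random

# Hypothetical toy dictionaries: English word -> translation, per target language.
TOY_DICT = {
    "es": {"play": "reproducir", "music": "música", "tomorrow": "mañana"},
    "th": {"play": "เล่น", "music": "เพลง"},
}

def code_switch(tokens, dictionaries, replace_prob=0.3, seed=None):
    """Randomly replace source-language tokens with translations from randomly
    chosen target languages, mixing several languages inside one utterance."""
    rng = random.Random(seed)
    switched = []
    for tok in tokens:
        langs = [lang for lang, d in dictionaries.items() if tok in d]
        if langs and rng.random() < replace_prob:
            lang = rng.choice(langs)
            switched.append(dictionaries[lang][tok])
        else:
            switched.append(tok)
    return switched

print(code_switch("play some music tomorrow".split(), TOY_DICT, seed=0))
```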

Cross-lingual Language Model Pretraining

TLDR
This work proposes two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective.

Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks

TLDR
It is found that fine-tuning on multiple languages together can bring further improvement to Unicoder, a universal language encoder that is insensitive to different languages.

Attention-Informed Mixed-Language Training for Zero-shot Cross-lingual Task-oriented Dialogue Systems

TLDR
Attention-Informed Mixed-Language Training (MLT) is introduced, a novel zero-shot adaptation method for cross-lingual task-oriented dialogue systems that leverages very few task-related parallel word pairs to generate code-switching sentences for learning the inter-lingual semantics across languages.

Multilingual Seq2seq Training with Similarity Loss for Cross-Lingual Document Classification

TLDR
This framework introduces a simple method of adding a loss to the learning objective that penalizes the distance between representations of bilingually aligned sentences, and finds the similarity loss significantly improves performance on both cross-lingual transfer and document classification.
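As an illustration of the kind of similarity loss described in the entry above, the sketch below penalizes the distance between encoder representations of bilingually aligned sentence pairs on top of the main training objective. The tensor shapes and the weighting term are assumptions, not the cited paper's exact setup.

```python
# Minimal sketch of a similarity loss over bilingually aligned sentence pairs.
import torch

def similarity_loss(src_repr: torch.Tensor, tgt_repr: torch.Tensor) -> torch.Tensor:
    """src_repr, tgt_repr: (batch, hidden) encodings of aligned sentence pairs."""
    # Squared Euclidean distance, averaged over the batch.
    return ((src_repr - tgt_repr) ** 2).sum(dim=-1).mean()

# Typical usage: total_loss = task_loss + lambda_sim * similarity_loss(src_repr, tgt_repr)
```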

Word Translation Without Parallel Data

TLDR
It is shown that a bilingual dictionary can be built between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way.

Exploring Fine-tuning Techniques for Pre-trained Cross-lingual Models via Continual Learning

TLDR
The method achieves better performance than other fine-tuning baselines on zero-shot cross-lingual part-of-speech tagging and named entity recognition tasks, and preserves the original cross-lingual ability of the pre-trained model when the authors fine-tune it on downstream cross-lingual tasks.