Customizing Contextualized Language Models for Legal Document Reviews

@article{Shaghaghian2020CustomizingCL,
  title={Customizing Contextualized Language Models for Legal Document Reviews},
  author={Shohreh Shaghaghian and Luna Feng and Borna Jafarpour and Nicolai Pogrebnyakov},
  journal={2020 IEEE International Conference on Big Data (Big Data)},
  year={2020},
  pages={2139-2148}
}
Inspired by inductive transfer learning in computer vision, many efforts have been made to train contextualized language models that boost the performance of natural language processing tasks. These models are mostly trained on large general-domain corpora such as news, books, or Wikipedia. Although these pre-trained generic language models capture the semantic and syntactic essence of a language well, exploiting them in a real-world domain-specific scenario still requires some…
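
As a rough sketch of the kind of domain customization the abstract alludes to, the snippet below continues masked-language-model pretraining of a generic BERT checkpoint on an in-domain legal corpus using the HuggingFace Transformers and Datasets libraries; the corpus path, model name, and all hyper-parameters are illustrative placeholders, not the authors' actual setup.

# Sketch only: continue masked-LM pretraining on an in-domain legal corpus.
# "legal_corpus.txt" is a placeholder path (one document per line).
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

corpus = load_dataset("text", data_files={"train": "legal_corpus.txt"})["train"]
corpus = corpus.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens, the standard masked-LM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="legal-bert-adapted",
                                         num_train_epochs=1,
                                         per_device_train_batch_size=8),
                  train_dataset=corpus,
                  data_collator=collator)
trainer.train()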

Citations of this paper

To tune or not to tune?: zero-shot models for legal case entailment

These experiments confirm a counter-intuitive result in the new paradigm of pretrained language models: given limited labeled data, models with little or no adaptation to the target task can be more robust to changes in the data distribution than models fine-tuned on it.

Semantic Role Labelling for Dutch Law Texts

A method to extract structured representations in the Flint language (van Doesburg and van Engers, 2019) from natural language, using both a rule-based and a transformer-based method, which indicates that the transformer-based method is a promising approach for automatically extracting Flint frames.

Extracting Structured Knowledge from Dutch Legal Texts: A Rule-based Approach

Legal texts are difficult to interpret, and their interpretation depends on the knowledge and experience of the legal expert. Formalising interpretations can improve transparency. However, creating…

Harnessing the Power of Natural Language Processing (NLP): A Vector Institute Industry Collaborative Project (Technical Report)

This report provides an overview of the collaboration between the Vector Institute (Vector) and some of its industrial partners on an applied NLP project, with model configurations kept as close as possible to the original GPT-2 small model.

Text Classification of Modern Mongolian Documents Using BERT Models

This paper investigates the application of state-of-the-art deep-learning-based natural language processing techniques to modern Mongolian documents and proposes BERT-based models called LEGAL-BERT-Mongolian, which demonstrate a certain degree of confusion among the “legal,” “economy,” and “politics” categories.

References

Showing 1-10 of 33 references

Deep Contextualized Word Representations

A new type of deep contextualized word representation is introduced that models both complex characteristics of word use and how these uses vary across linguistic contexts, allowing downstream models to mix different types of semi-supervision signals.

Universal Language Model Fine-tuning for Text Classification

This work proposes Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduces techniques that are key for fine-tuning a language model.
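
One of those techniques, discriminative fine-tuning, simply gives each layer group its own learning rate. A minimal PyTorch sketch is shown below; it is not the ULMFiT reference implementation, and the toy layer stack, base rate, and decay factor are illustrative.

import torch

def discriminative_param_groups(layers, base_lr=2e-3, decay=2.6):
    # Top layer gets base_lr; each earlier layer is divided by a further factor of
    # `decay`, following the per-layer learning-rate idea described in the ULMFiT paper.
    groups = []
    for depth, layer in enumerate(reversed(list(layers))):
        groups.append({"params": layer.parameters(), "lr": base_lr / (decay ** depth)})
    return groups

toy_model = torch.nn.ModuleList([torch.nn.Linear(32, 32) for _ in range(4)])  # stands in for an LSTM LM
optimizer = torch.optim.Adam(discriminative_param_groups(toy_model))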

FinBERT: Financial Sentiment Analysis with Pre-trained Language Models

FinBERT, a language model based on BERT, is introduced to tackle NLP tasks in the financial domain and it is found that even with a smaller training set and fine-tuning only a part of the model, FinBERT outperforms state-of-the-art machine learning methods.
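
The idea of "fine-tuning only a part of the model" can be approximated by freezing the embeddings and lower encoder layers so that only the top layers and the task head receive gradient updates. The sketch below uses a generic BERT checkpoint and an arbitrary cut-off, both illustrative rather than FinBERT's actual configuration.

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
for param in model.bert.embeddings.parameters():
    param.requires_grad = False                  # freeze the embedding layer
for layer in model.bert.encoder.layer[:8]:       # freeze the lower 8 of 12 encoder layers
    for param in layer.parameters():
        param.requires_grad = False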

LSTM-based Deep Learning Models for non-factoid answer selection

A general deep learning framework is applied to the answer selection task, which does not depend on manually defined features or linguistic tools, and is extended in two directions to define a more composite representation for questions and answers.

SciBERT: Pretrained Contextualized Embeddings for Scientific Text

SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks and demonstrates statistically significant improvements over BERT.

Classifying sentential modality in legal language: a use case in financial regulations, acts and directives

This paper outlines a data-driven approach to classifying deontic modalities using ensembled Artificial Neural Networks that incorporate domain-specific legal distributional semantic model (DSM) representations in combination with a general DSM representation.

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
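
The "one additional output layer" corresponds to a small classification head attached on top of the pretrained encoder. In the hedged sketch below (the sentences and labels are made up), passing labels to the model returns the cross-entropy loss directly:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["This clause limits liability.", "The parties met on Tuesday."],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])                 # e.g. clause of interest vs. other
outputs = model(**batch, labels=labels)       # pretrained encoder + new classification head
outputs.loss.backward()                       # one ordinary supervised fine-tuning step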

HuggingFace's Transformers: State-of-the-art Natural Language Processing

The Transformers library is an open-source library that consists of carefully engineered state-of-the-art Transformer architectures under a unified API and a curated collection of pretrained models made by and available for the community.
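
For illustration, the unified API lets a single pipeline call pull a pretrained model together with its matching tokenizer from the model collection; the example sentence is arbitrary.

from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("The tenant shall pay the [MASK] on the first day of each month."))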

LEGAL-BERT: The Muppets straight out of Law School

This work proposes a broader hyper-parameter search space when fine-tuning for downstream tasks and releases LEGAL-BERT, a family of BERT models intended to assist legal NLP research, computational law, and legal technology applications.
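
Assuming the released checkpoints are the ones published under the nlpaueb namespace on the HuggingFace hub (an assumption about the exact identifier, which the summary above does not state), loading one of them takes two calls:

from transformers import AutoTokenizer, AutoModel

# "nlpaueb/legal-bert-base-uncased" is believed to be the published hub identifier; adjust if it differs.
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-base-uncased")
model = AutoModel.from_pretrained("nlpaueb/legal-bert-base-uncased")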

BERT Goes to Law School: Quantifying the Competitive Advantage of Access to Large Legal Corpora in Contract Understanding

This paper shows that fine-tuning BERT on legal documents similarly provides valuable improvements on NLP tasks in the legal domain, and that access to large legal corpora is a competitive advantage for both commercial applications and academic research on contract analysis.