Critical Sentence Identification in Legal Cases Using Multi-Class Classification

@article{Jayasinghe2021CriticalSI,
  title={Critical Sentence Identification in Legal Cases Using Multi-Class Classification},
  author={Sahan Jayasinghe and Lakith Rambukkanage and Ashan Silva and Nisansa de Silva and Amal Perera},
  journal={2021 IEEE 16th International Conference on Industrial and Information Systems (ICIIS)},
  year={2021},
  pages={146-151}
}
Inherently, the legal domain contains a vast amount of data in text format. Therefore, it requires the application of Natural Language Processing (NLP) to cater to the analytically demanding needs of the domain. The advancement of NLP is spreading through various domains, such as the legal domain, in the form of practical applications and academic research. Identifying critical sentences, facts, and arguments in a legal case is a tedious task for legal professionals. In this research we explore the…
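The task described in the abstract — assigning each sentence of a legal case to one of several classes — can be sketched as a standard multi-class text-classification pipeline. The following is a minimal illustration only, assuming TF-IDF features and a linear SVM via scikit-learn; the class labels and example sentences are hypothetical and are not the paper's dataset or method.

```python
# Hypothetical sketch: multi-class sentence classification for legal text.
# Assumes TF-IDF features + linear SVM (scikit-learn); labels are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Illustrative sentence classes: "critical", "fact", "argument", "other".
sentences = [
    "The court holds that the contract is void.",
    "The plaintiff signed the agreement on 3 March 2015.",
    "Counsel argues that the clause is unconscionable.",
    "The hearing was adjourned until the following week.",
    "The appeal is therefore dismissed with costs.",
    "The defendant delivered the goods in April.",
    "It is submitted that the statute does not apply.",
    "The registrar listed the matter for mention.",
]
labels = ["critical", "fact", "argument", "other",
          "critical", "fact", "argument", "other"]

# Vectorize sentences into unigram/bigram TF-IDF features, then fit the SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(sentences, labels)

# Predict the class of a new, unseen sentence.
pred = model.predict(["The court therefore dismisses the claim."])
print(pred[0])
```

In practice, the feature representation (e.g., sentence embeddings, as several of the references below use) and the label set would be chosen to fit the legal corpus at hand.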


References

Showing 1-10 of 21 references
Identifying Legal Party Members from Legal Opinion Texts Using Natural Language Processing
Law and order is a field that can benefit greatly from the contribution of Natural Language Processing (NLP). One area in which NLP can be of immense help is information retrieval
Legal Party Extraction from Legal Opinion Text with Sequence to Sequence Learning
In the field of natural language processing, domain specific information retrieval using given documents has been a prominent and ongoing research area. The automatic extraction of the legal parties
Party Identification of Legal Documents using Co-reference Resolution and Named Entity Recognition
This study combined several existing natural language processing annotators to extract legal parties from a given court case document; evaluation on manually labelled court case paragraphs demonstrates that the system successfully identifies legal parties.
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Presents a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models' understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models; the benchmark favors models that represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge transfer across tasks.
Supervised Learning of Universal Sentence Representations from Natural Language Inference Data
Shows that universal sentence representations trained on the supervised data of the Stanford Natural Language Inference dataset consistently outperform unsupervised methods such as SkipThought vectors on a wide range of transfer tasks.
Extracting Important Sentences with Support Vector Machines
Proposes a sentence-extraction method based on Support Vector Machines (SVMs), confirms the method's performance, and clarifies which features are effective for extracting from different document genres.
Sentence Extraction Based Single Document Summarization
Text summarization is crucial as we enter the era of information overload. In this paper we present an automatic summarization system that generates a summary for a given input
Synergistic union of Word2Vec and lexicon for domain specific semantic similarity
Introduces a domain-specific semantic similarity measure created by the synergistic union of word2vec, a word embedding method used for semantic similarity calculation, and lexicon-based (lexical) semantic similarity methods.
Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification
Develops three neural networks that effectively incorporate supervision from the sentiment polarity of text (e.g., sentences or tweets) into their loss functions; the performance of SSWE is further improved by concatenating SSWE with an existing feature set.
Universal Sentence Encoder
Finds that transfer learning using sentence embeddings tends to outperform word-level transfer, achieving surprisingly good performance with minimal amounts of supervised training data for a transfer task.