Corpus ID: 49421402

Character-Level Feature Extraction with Densely Connected Networks

@inproceedings{Lee2018CharacterLevelFE,
  title={Character-Level Feature Extraction with Densely Connected Networks},
  author={Chanhee Lee and Young-Bum Kim and Dongyub Lee and Heuiseok Lim},
  booktitle={COLING},
  year={2018}
}
Generating character-level features is an important step for achieving good results in various natural language processing tasks. [...] The proposed method does not require any language- or task-specific assumptions, and shows robustness and effectiveness while being faster than CNN- or RNN-based methods. Evaluating this method on three sequence labeling tasks - slot tagging, Part-of-Speech (POS) tagging, and Named-Entity Recognition (NER) - we obtain state-of-the-art performance with a 96.62 F1-score…
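Based only on the abstract above, the core idea of densely connected character-level feature extraction can be sketched as follows. All names and dimensions here (CHAR_VOCAB, EMB_DIM, GROWTH, the max-pooling readout) are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

# Hypothetical sketch: character embeddings pass through layers whose inputs
# are the concatenation of ALL previous layer outputs (dense connectivity).
rng = np.random.default_rng(0)

CHAR_VOCAB = 64   # assumed character vocabulary size
EMB_DIM = 8       # assumed character embedding size
GROWTH = 4        # units each dense layer adds to the feature map
NUM_LAYERS = 3

emb = rng.normal(size=(CHAR_VOCAB, EMB_DIM))
# Layer i maps the concatenation of all earlier outputs to GROWTH new units.
weights = [rng.normal(size=(EMB_DIM + i * GROWTH, GROWTH)) * 0.1
           for i in range(NUM_LAYERS)]

def char_features(char_ids):
    """Return a fixed-size feature vector for a word from its character ids."""
    x = emb[char_ids]                        # (n_chars, EMB_DIM)
    for w in weights:
        h = np.maximum(x @ w, 0.0)           # ReLU output of this layer
        x = np.concatenate([x, h], axis=-1)  # dense connectivity: keep everything
    return x.max(axis=0)                     # max-pool over the characters

feat = char_features(np.array([3, 17, 42]))
print(feat.shape)  # (20,) = EMB_DIM + NUM_LAYERS * GROWTH
```

Because every layer sees all earlier feature maps, the final vector mixes shallow (near-raw embedding) and deep features, which is the property dense connectivity is usually credited with.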
Banner: A Cost-Sensitive Contextualized Model for Bangla Named Entity Recognition
This paper proposes multiple BERT-based deep learning models that use contextualized embeddings from BERT as inputs, along with a simple statistical approach for class-weighted cost-sensitive learning.
Detecting Emotion on Indonesian Online Chat Text Using Text Sequential Labeling
This paper employs sequential labeling with various features, i.e., bag-of-words 1-grams and 2-grams, pragmatic features, non-textual features, and word-embedding features, using Long Short-Term Memory (LSTM) as the sequential labeling technique and several machine learning methods for the non-sequential approaches.
An Automatically Extracting Formal Information from Unstructured Security Intelligence Report
A framework is proposed that uses five analytic techniques to formalize a report and extract key information, reducing the time required to extract information from large unstructured SIRs.
Drifted Twitter Spam Classification Using Multiscale Detection Test on K-L Divergence
Comprehensive experiments show that K-L divergence exhibits highly consistent change patterns across features when a drift occurs, and that the MDDT is effective in improving the final classification result in accuracy, recall, and F-measure.
Auxiliary Sequence Labeling Tasks for Disfluency Detection
A method is proposed that utilizes named entity recognition (NER) and part-of-speech (POS) tagging as auxiliary sequence labeling (SL) tasks for disfluency detection, analyzing which auxiliary SL tasks are influential depending on the baseline model.

References

Showing 1-10 of 59 references
Named Entity Recognition with Bidirectional LSTM-CNNs
A novel neural network architecture is presented that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering.
End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF
A novel neural network architecture is introduced that benefits from both word- and character-level representations automatically, by using a combination of bidirectional LSTM, CNN, and CRF, making it applicable to a wide range of sequence labeling tasks.
Learning Character-level Representations for Part-of-Speech Tagging
A deep neural network is proposed that learns character-level representations of words and associates them with usual word representations to perform POS tagging, producing state-of-the-art POS taggers for two languages.
Spoken language understanding using long short-term memory neural networks
This paper investigates long short-term memory (LSTM) neural networks, which contain input, output, and forget gates and are more advanced than simple RNNs, for the word labeling task, and proposes a regression model on top of the unnormalized LSTM scores to explicitly model output-label dependence.
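The gated update this summary refers to can be sketched as a toy LSTM step. The weights, dimensions, and helper names below are illustrative assumptions, not the cited paper's model:

```python
import numpy as np

# Toy LSTM cell: input, forget, and output gates control the cell-state update.
rng = np.random.default_rng(2)
D = 4  # hidden/input size, chosen only for this demo

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stacked gate weights [input, forget, output, candidate], each (2D, D).
W = rng.normal(size=(4, 2 * D, D)) * 0.5

def lstm_step(x, h, c):
    z = np.concatenate([x, h])  # current input joined with previous hidden state
    i = sigmoid(z @ W[0])       # input gate: how much new information enters
    f = sigmoid(z @ W[1])       # forget gate: how much old cell state survives
    o = sigmoid(z @ W[2])       # output gate: how much cell state is exposed
    g = np.tanh(z @ W[3])       # candidate cell update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

h, c = np.zeros(D), np.zeros(D)
for x in rng.normal(size=(3, D)):  # run three time steps
    h, c = lstm_step(x, h, c)
print(h.shape)  # (4,)
```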
Leveraging Sentence-level Information with Encoder LSTM for Semantic Slot Filling
This paper enhances LSTM-based sequence labeling to explicitly model label dependencies, and further proposes incorporating global information spanning the whole input sequence when predicting the label sequence.
Sequential Convolutional Neural Networks for Slot Filling in Spoken Language Understanding
A novel CNN architecture for sequence labeling is proposed that takes into account the previous context words with preserved order information, pays special attention to the current word and its surrounding context, and combines information from past and future words for classification.
Neural Networks Leverage Corpus-wide Information for Part-of-speech Tagging
This paper proposes a neural network approach that benefits from the non-linearity of corpus-wide statistics for part-of-speech (POS) tagging, designed as a combination of a linear model for discrete features and a feed-forward neural network that captures the non-linear interactions among the continuous features.
Deep Residual Learning for Image Recognition
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.
Natural Language Processing (Almost) from Scratch
We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition…
Dynamic Feature Induction: The Last Gist to the State-of-the-Art
A novel technique called dynamic feature induction is introduced that keeps inducing high-dimensional features automatically until the feature space becomes 'more' linearly separable, and shows state-of-the-art results for both tasks.