BioNLP-OST 2019 RDoC Tasks: Multi-grain Neural Relevance Ranking Using Topics and Attention Based Query-Document-Sentence Interactions

@article{Chaudhary2019BioNLPOST2R,
  title={BioNLP-OST 2019 RDoC Tasks: Multi-grain Neural Relevance Ranking Using Topics and Attention Based Query-Document-Sentence Interactions},
  author={Yatin Chaudhary and Pankaj Gupta and Hinrich Sch{\"u}tze},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.00314}
}
This paper presents our system details and results from participation in the RDoC Tasks of BioNLP-OST 2019. The Research Domain Criteria (RDoC) construct is a broad, multi-dimensional framework for describing mental health disorders by combining knowledge ranging from genomics to behaviour. The lack of an RDoC-labelled dataset and the tedious labelling process prevent the RDoC framework from reaching its full potential in the biomedical research community and the healthcare industry. Therefore, Task-1 aims at…
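The abstract is truncated above, so the exact architecture is not reproduced here. As a rough illustration of attention-based query-document-sentence interactions for relevance ranking, the following minimal PyTorch sketch scores a document for a query by attending over pooled sentence embeddings. The module name, tensor shapes, projection sizes, and pooling choices are assumptions for illustration only, not the authors' implementation.

# Minimal sketch (assumption, not the authors' architecture): score a document
# for a query by dot-product attention over its pooled sentence embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryDocSentenceScorer(nn.Module):
    def __init__(self, emb_dim=200, hidden=128):
        super().__init__()
        self.query_proj = nn.Linear(emb_dim, hidden)
        self.sent_proj = nn.Linear(emb_dim, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, query_emb, sent_embs):
        # query_emb: (emb_dim,) pooled query embedding
        # sent_embs: (num_sents, emb_dim) pooled sentence embeddings of one document
        q = torch.tanh(self.query_proj(query_emb))        # (hidden,)
        s = torch.tanh(self.sent_proj(sent_embs))         # (num_sents, hidden)
        attn = F.softmax(s @ q, dim=0)                     # query-sentence attention weights
        doc_repr = (attn.unsqueeze(1) * s).sum(dim=0)      # attention-weighted document vector
        return self.out(doc_repr * q).squeeze(-1)          # scalar relevance score

# Toy usage with random embeddings standing in for pretrained biomedical embeddings:
scorer = QueryDocSentenceScorer()
score = scorer(torch.randn(200), torch.randn(12, 200))
print(float(score))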
2 Citations
List-wise learning to rank biomedical question-answer pairs with deep ranking recursive autoencoders
TLDR: This paper presents a deep ranking recursive autoencoder (rankingRAE) architecture for ranking pairs of questions and candidate snippet answers, so as to obtain the most relevant candidate answers for biomedical questions from the potentially relevant documents.
Using Generative Adversarial Networks for Relevance Evaluation of Search Engine Results
TLDR: A new approach to evaluating the relevance of search engine results, based on generative adversarial networks (GANs), is proposed; the results clearly demonstrate the feasibility of the approach despite the simplicity of the models used.

References

SHOWING 1-10 OF 22 REFERENCES
BioWordVec, improving biomedical word embeddings with subword information and MeSH
TLDR: This work presents BioWordVec: an open set of biomedical word vectors/embeddings that combines subword information from unlabeled biomedical text with a widely used biomedical controlled vocabulary, Medical Subject Headings (MeSH). (A minimal embedding-loading sketch appears after this reference list.)
Deep Relevance Ranking Using Enhanced Document-Query Interactions
TLDR: Several new models for document relevance ranking are explored, building upon the Deep Relevance Matching Model (DRMM) of Guo et al. (2016) and inspired by PACRR's convolutional n-gram matching features, but extended in several ways, including multiple views of query and document inputs.
Distributional Semantics Resources for Biomedical Text Processing
TLDR: This study introduces the first set of such language resources created from an analysis of the entire available biomedical literature, including a dataset of all 1- to 5-grams and their probabilities in these texts, as well as new models of word semantics.
Table Filling Multi-Task Recurrent Neural Network for Joint Entity and Relation Extraction
TLDR: A Table Filling Multi-Task Recurrent Neural Network model reduces entity recognition and relation classification to a table-filling problem and models their interdependencies; a simple approach of piggybacking candidate entities to model the label dependencies from relations to entities is shown to improve performance.
The Probabilistic Relevance Framework: BM25 and Beyond
TLDR: This work presents the PRF from a conceptual point of view, describing the probabilistic modelling assumptions behind the framework and the different ranking algorithms that result from its application: the binary independence model, relevance feedback models, BM25, and BM25F. (A worked BM25 scoring sketch appears after this reference list.)
Replicated Siamese LSTM in Ticketing System for Similarity Learning and Retrieval in Asymmetric Texts
TLDR: A novel Replicated Siamese LSTM model is presented to learn similarity in asymmetric text pairs; it gives 22% and 7% gains (Accuracy@10) on the retrieval task over unsupervised and supervised baselines, respectively.
textTOvec: Deep Contextualized Neural Autoregressive Models of Language with Distributed Compositional Prior
TLDR: Novel neural autoregressive topic model variants, coupled with neural language models and embedding priors, are presented that consistently outperform state-of-the-art generative topic models in terms of generalization, interpretability, and applicability over 6 long-text and 8 short-text datasets from diverse domains.
Autoencoding Variational Inference For Topic Models
TLDR: This work presents what is, to the authors' knowledge, the first effective AEVB-based inference method for latent Dirichlet allocation (LDA), called Autoencoded Variational Inference For Topic Model (AVITM).
Document Informed Neural Autoregressive Topic Models with Distributional Prior
TLDR: Novel neural autoregressive topic model variants are presented that consistently outperform state-of-the-art generative topic models in terms of generalization, interpretability, and applicability over 7 long-text and 8 short-text datasets from diverse domains.
LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation
TLDR: This work analyzes and interprets the cumulative nature of RNNs via a proposed technique named Layer-wIse Semantic Accumulation (LISA) for explaining decisions and detecting the most likely saliency patterns that the network relies on while making decisions.
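Minimal embedding-loading sketch for the BioWordVec reference above, assuming gensim is installed and a pretrained release has been downloaded in word2vec binary format; the file name below is a placeholder, not an exact release name.

# Hedged sketch: load pretrained biomedical word embeddings with gensim
# (the file name is a placeholder for a downloaded BioWordVec release).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("biowordvec_placeholder.bin", binary=True)
print(vectors.most_similar("amygdala", topn=5))  # nearest terms in the embedding space
print(vectors["amygdala"].shape)                 # embedding dimensionality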
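Worked BM25 scoring sketch for the probabilistic relevance framework reference above: the standard Okapi BM25 formula with the common default parameters k1 = 1.5 and b = 0.75. The toy corpus statistics are made up for illustration and are not tied to any implementation used in the paper.

import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freqs, num_docs, avgdl, k1=1.5, b=0.75):
    # Standard Okapi BM25: sum over query terms of IDF times a saturated term frequency,
    # with document-length normalisation controlled by b and saturation by k1.
    tf = Counter(doc_terms)
    dl = len(doc_terms)
    score = 0.0
    for term in query_terms:
        df = doc_freqs.get(term, 0)
        idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1.0)
        f = tf[term]
        score += idf * (f * (k1 + 1)) / (f + k1 * (1 - b + b * dl / avgdl))
    return score

# Toy usage: one short document in a hypothetical 1000-document collection.
print(bm25_score(["amygdala", "fear"],
                 "the amygdala mediates fear responses in the amygdala".split(),
                 doc_freqs={"amygdala": 40, "fear": 120}, num_docs=1000, avgdl=150.0))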