Thinking about GPT-3 In-Context Learning for Biomedical IE? Think Again
@article{Gutierrez2022ThinkingAG,
  title   = {Thinking about GPT-3 In-Context Learning for Biomedical IE? Think Again},
  author  = {Bernal Jimenez Gutierrez and Nikolas McNeal and Clay Washington and You Chen and Lang Li and Huan Sun and Yu Su},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2203.08410}
}
The strong few-shot in-context learning capability of large pre-trained language models (PLMs) such as GPT-3 is highly appealing for application domains such as biomedicine, which feature high and diverse demands of language technologies but also high data annotation costs. In this paper, we present the first systematic and comprehensive study to compare the few-shot performance of GPT-3 in-context learning with fine-tuning smaller (i.e., BERT-sized) PLMs on two highly representative biomedical…
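To make the setting concrete, here is a minimal sketch of few-shot in-context learning for a biomedical extraction task of the kind the paper studies. The demonstration sentences, the disease-extraction framing, and the `call_lm()` helper are hypothetical placeholders, not the authors' actual prompts, data, or API.

```python
# Sketch: build a k-shot prompt from labeled demonstrations, then ask a large PLM
# to continue the pattern for an unlabeled test sentence.

def build_prompt(demos, test_sentence):
    """Concatenate k labeled demonstrations followed by the unlabeled test input."""
    parts = []
    for sentence, entities in demos:
        parts.append(f"Sentence: {sentence}\nDiseases: {', '.join(entities) or 'none'}\n")
    parts.append(f"Sentence: {test_sentence}\nDiseases:")
    return "\n".join(parts)

demos = [
    ("The patient was diagnosed with type 2 diabetes.", ["type 2 diabetes"]),
    ("No evidence of hepatitis was found on imaging.", ["hepatitis"]),
]
prompt = build_prompt(demos, "She has a history of rheumatoid arthritis.")

# call_lm is a stand-in for whatever large PLM completion endpoint is used;
# the model is expected to continue the pattern and emit the entity mentions.
# prediction = call_lm(prompt)
print(prompt)
```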
2 Citations
Large Language Models are Zero-Shot Clinical Information Extractors
- ArXiv, 2022
It is shown that large language models, such as GPT-3, perform well at zero-shot information extraction from clinical text despite not being trained specifically for the clinical domain, and that good resolvers share common components (e.g., “safety checks” that ensure the language model outputs faithfully match the input data).
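A minimal sketch of the kind of "safety check" resolver described above: keep only extracted spans that literally appear in the source text, so the model cannot introduce entities that are absent from the input. The comma-separated output format and the example note are assumptions for illustration, not that paper's actual pipeline.

```python
# Sketch: resolve a raw LM completion into verified spans.

def resolve(raw_completion: str, source_text: str) -> list[str]:
    """Split a comma-separated LM completion and drop spans absent from the input."""
    candidates = [span.strip() for span in raw_completion.split(",") if span.strip()]
    return [span for span in candidates if span.lower() in source_text.lower()]

note = "Patient reports nausea and dizziness after starting lisinopril."
print(resolve("nausea, dizziness, chest pain", note))  # ['nausea', 'dizziness']
```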
PolarFly: A Cost-Effective and Flexible Low-Diameter Topology
- ArXiv, 2022
This is the first known diameter-2 topology that asymptotically reaches the Moore bound on the number of nodes for a given network degree and diameter, and it outperforms competitive networks in terms of scalability, cost and performance for various traffic patterns.
References
Showing 1-10 of 46 references
GPT-3 Models are Poor Few-Shot Learners in the Biomedical Domain
- ArXiv, 2021
This study investigates the performance of two powerful transformer language models, GPT-3 and BioBERT, in few-shot settings on various biomedical NLP tasks and suggests that language models may benefit substantially from in-domain pretraining for task-specific few-shot learning.
SciFive: a text-to-text transformer model for biomedical literature
- ArXiv, 2021
The SciFive model outperforms current state-of-the-art methods on named entity recognition, relation extraction, natural language inference, and question answering, showing that text-generation methods have significant potential across a broad array of biomedical NLP tasks, particularly those requiring longer, more complex outputs.
Exploring a Unified Sequence-To-Sequence Transformer for Medical Product Safety Monitoring in Social Media
- EMNLP, 2021
This paper frames adverse event (AE) detection and extraction as a sequence-to-sequence problem using the T5 model architecture and achieves strong performance improvements over competitive baselines on several English benchmarks, with increased model robustness leading to further gains.
Publicly Available Clinical BERT Embeddings
- Proceedings of the 2nd Clinical Natural Language Processing Workshop, 2019
This work explores and releases two BERT models for clinical text: one for generic clinical text and another for discharge summaries specifically, and demonstrates that using a domain-specific model yields performance improvements on 3/5 clinical NLP tasks, establishing a new state-of-the-art on the MedNLI dataset.
Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets
- BioNLP@ACL, 2019
The Biomedical Language Understanding Evaluation (BLUE) benchmark is introduced to facilitate research on pre-trained language representations in the biomedical domain, and it is found that the BERT model pre-trained on PubMed abstracts and MIMIC-III clinical notes achieves the best results.
Making Pre-trained Language Models Better Few-shot Learners
- ACL, 2021
The LM-BFF approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning.
True Few-Shot Learning with Prompts—A Real-World Perspective
- Transactions of the Association for Computational Linguistics, 2022
An extensive study of PET, a method that combines textual instructions with example-based finetuning, shows that, if correctly configured, PET performs strongly in true few-shot settings without a dev set, underpinning the belief that learning from instructions will play an important role on the path towards human-like few-shot learning capabilities.
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
- Bioinformatics, 2020
This article introduces BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), a domain-specific language representation model pre-trained on large-scale biomedical corpora, which largely outperforms BERT and previous state-of-the-art models on a variety of biomedical text mining tasks.
SciBERT: A Pretrained Language Model for Scientific Text
- EMNLP, 2019
SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks and demonstrates statistically significant improvements over BERT.
Calibrate Before Use: Improving Few-Shot Performance of Language Models
- ICML, 2021
This work first estimates the model's bias towards each answer by asking for its prediction when given the training prompt and a content-free test input such as "N/A", and then fits calibration parameters that cause the prediction for this input to be uniform across answers.
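A minimal sketch of the contextual calibration idea summarized above: measure the label probabilities the model assigns to a content-free input such as "N/A", then rescale test-time probabilities so that the content-free input would score uniformly across answers. The probability vectors below are made-up numbers, not outputs of a real model, and the renormalization step is a simplification of the paper's softmax-based formulation.

```python
import numpy as np

def calibrate(p_test: np.ndarray, p_content_free: np.ndarray) -> np.ndarray:
    """Diagonal calibration: divide out the content-free bias and renormalize."""
    w = 1.0 / p_content_free          # W = diag(p_cf)^-1, b = 0
    scores = w * p_test
    return scores / scores.sum()

p_cf = np.array([0.7, 0.2, 0.1])      # model's bias toward each label given "N/A"
p_test = np.array([0.5, 0.3, 0.2])    # uncalibrated prediction for a real test input
print(calibrate(p_test, p_cf))        # the bias toward the first label is corrected
```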