Corpus ID: 214728453

NukeBERT: A Pre-trained language model for Low Resource Nuclear Domain

@article{Jain2020NukeBERTAP,
  title={NukeBERT: A Pre-trained language model for Low Resource Nuclear Domain},
  author={Ayush Jain and Meenachi Ganesamoorty},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.13821}
}
Significant advances have been made in recent years in Natural Language Processing, with machines surpassing human performance on many tasks, including Question Answering. The majority of deep learning methods for Question Answering target domains with large datasets and a mature literature. The area of nuclear and atomic energy has remained largely unexplored in exploiting non-annotated data to drive industry-viable applications. Due to the lack of a dataset, a new…
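The abstract describes BERT-style pretraining on unlabeled domain text. The core of that objective is masked language modeling (MLM): hide a fraction of tokens and train the model to recover them. Below is a minimal, illustrative sketch of the masking step only, in plain Python with a hypothetical `mask_tokens` helper; it is not the paper's code, and it simplifies real BERT masking (which also replaces some selected tokens with random tokens or leaves them unchanged in an 80/10/10 split).

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", rng=None):
    """Simplified BERT-style MLM masking.

    Each token is independently masked with probability mask_prob.
    Returns (masked_tokens, labels): labels hold the original token at
    masked positions (the prediction target) and None elsewhere
    (positions excluded from the MLM loss).
    """
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)
            labels.append(tok)    # model must predict the original token
        else:
            masked.append(tok)
            labels.append(None)   # not part of the MLM loss
    return masked, labels

# Example on a made-up nuclear-domain sentence (assumed, not from the paper):
toks = "the reactor core temperature exceeded the threshold".split()
m, y = mask_tokens(toks, mask_prob=0.3)
```

In practice, pretraining on raw domain text with this objective is what lets a model like the one described adapt to low-resource vocabularies without any annotation.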
