Regressing Location on Text for Probabilistic Geocoding
@article{Radford2021RegressingLO,
  title   = {Regressing Location on Text for Probabilistic Geocoding},
  author  = {Benjamin J. Radford},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2107.00080}
}
Text data are an important source of detailed information about social and political events. Automated systems parse large volumes of text data to infer or extract structured information that describes actors, actions, dates, times, and locations. One of these sub-tasks is geocoding: predicting the geographic coordinates associated with events or locations described by a given text. I present an end-to-end probabilistic model for geocoding text data. Additionally, I collect a novel data set for…
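To make the abstract's idea concrete, the sketch below shows one way to regress geographic coordinates directly on text with a probabilistic output head. It is a minimal illustration under assumed choices, not the paper's architecture: the bag-of-tokens encoder, the diagonal Gaussian mixture head over (latitude, longitude), and the negative log-likelihood loss are all placeholders introduced here for exposition.

```python
# Hypothetical sketch of probabilistic geocoding as regression on text.
# Encoder, mixture head, and loss are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class MixtureGeocoder(nn.Module):
    def __init__(self, vocab_size=30000, dim=128, n_components=5):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)       # bag-of-tokens text encoder (placeholder)
        self.mix_logits = nn.Linear(dim, n_components)      # mixture component weights
        self.means = nn.Linear(dim, n_components * 2)       # per-component (lat, lon) means
        self.log_scales = nn.Linear(dim, n_components * 2)  # per-component log std devs
        self.n_components = n_components

    def forward(self, token_ids):
        h = self.embed(token_ids)                            # (batch, dim)
        return (
            self.mix_logits(h),
            self.means(h).view(-1, self.n_components, 2),
            self.log_scales(h).view(-1, self.n_components, 2),
        )

def nll(mix_logits, means, log_scales, coords):
    """Negative log-likelihood of (lat, lon) targets under a diagonal Gaussian mixture."""
    coords = coords.unsqueeze(1)                             # (batch, 1, 2) for broadcasting
    comp_logp = torch.distributions.Normal(means, log_scales.exp()).log_prob(coords).sum(-1)
    log_mix = torch.log_softmax(mix_logits, dim=-1)
    return -torch.logsumexp(log_mix + comp_logp, dim=-1).mean()

# Toy usage: 4 documents of 6 token ids each, paired with (lat, lon) targets.
model = MixtureGeocoder()
tokens = torch.randint(0, 30000, (4, 6))
coords = torch.tensor([[35.2, -80.8], [48.9, 2.4], [-33.9, 151.2], [40.7, -74.0]])
loss = nll(*model(tokens), coords)
loss.backward()
```

A mixture head is used here only because it yields a full predictive distribution over coordinates rather than a single point estimate, which is the general sense of "probabilistic geocoding" in the abstract.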
One Citation
Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021): Workshop and Shared Task Report
- Computer Science, CASE
- 2021
This workshop is the fourth in a series of workshops on the automatic extraction of socio-political events from news, organized by the Emerging Market Welfare Project, with the support of the Joint…
References
Lost in Space: Geolocation in Event Data
- Computer Science, Political Science Research and Methods
- 2018
A two-stage supervised machine-learning algorithm that classifies each location mention as correct or incorrect is introduced; the proposed algorithm outperforms existing geocoders even on a case added post hoc to test its generality.
Mordecai: Full Text Geoparsing and Event Geocoding
- Computer Science, J. Open Source Softw.
- 2017
Mordecai is a new full-text geoparsing system that extracts place names from text, resolves them to their correct entries in a gazetteer, and returns structured geographic information for the…
Spatial Language Representation with Multi-Level Geocoding
- Computer Science, ArXiv
- 2020
A multi-level geocoding model (MLG) that learns to associate texts with geographic locations, effectively learning the connection between text spans and coordinates, and can thus be extended to toponyms not present in knowledge bases.
Distributed Representations of Words and Phrases and their Compositionality
- Computer Science, NIPS
- 2013
This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
Cross-Lingual Ability of Multilingual BERT: An Empirical Study
- Computer Science, Linguistics, ICLR
- 2020
A comprehensive study of the contribution of different components in M-BERT to its cross-lingual ability, finding that the lexical overlap between languages plays a negligible role, while the depth of the network is an integral part of it.
Adam: A Method for Stochastic Optimization
- Computer Science, ICLR
- 2015
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Attention is All you Need
- Computer Science, NIPS
- 2017
A new simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it generalizes well to other tasks, as shown by its successful application to English constituency parsing with both large and limited training data.
RoBERTa: A Robustly Optimized BERT Pretraining Approach
- Computer Science, ArXiv
- 2019
It is found that BERT was significantly undertrained and can match or exceed the performance of every model published after it; the best model achieves state-of-the-art results on GLUE, RACE, and SQuAD.
Regressing Location on Text for Probabilistic Geocoding.
- Proceedings of the CASE Workshop at ACL-IJCNLP
- 2021
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
- Computer Science, ArXiv
- 2019
This work proposes a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can be fine-tuned with good performance on a wide range of tasks like its larger counterparts, and introduces a triple loss combining language modeling, distillation, and cosine-distance losses.