The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics, is introduced, and the data for the 2021 shared task at the associated GEM Workshop are described.
Småprat: DialoGPT for Natural Language Generation of Swedish Dialogue by Transfer Learning
- Tosin P. Adewumi, N. Abid, M. Liwicki
- Computer Science, Proceedings of the Northern Lights Deep Learning…
- 12 October 2021
DialoGPT, an English-language pre-trained model, is adapted by training on three different Swedish conversational datasets obtained from publicly available sources, with results indicating that transfer learning can be exploited with considerable success.
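The transfer-learning setup can be illustrated with a minimal causal-LM fine-tuning sketch, assuming the public `microsoft/DialoGPT-medium` checkpoint on Hugging Face; the Swedish utterances below are placeholders, not samples from the paper's datasets.

```python
# A minimal sketch of fine-tuning DialoGPT on dialogue turns; assumes the
# microsoft/DialoGPT-medium checkpoint, and the Swedish utterances are
# placeholders, not data from the paper.
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Dialogue turns are joined with the end-of-sequence token, the usual
# DialoGPT convention for marking turn boundaries.
dialogue = ["Hej, hur mår du?", "Bara bra, tack!"]  # placeholder turns
text = tokenizer.eos_token.join(dialogue) + tokenizer.eos_token
inputs = tokenizer(text, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
outputs = model(**inputs, labels=inputs["input_ids"])  # LM loss on the turns
outputs.loss.backward()
optimizer.step()
```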
Potential Idiomatic Expression (PIE)-English: Corpus for Classes of Idioms
This is the first idiom corpus with classes of idioms beyond the binary literal/general classification; the corpus and the relevant code for working with it on NLP tasks are made publicly available.
MasakhaNER: Named Entity Recognition for African Languages
- David Ifeoluwa Adelani, Jade Z. Abbott, Salomey Osei
- Computer Science, Transactions of the Association for Computational…
- 22 March 2021
This work brings together different stakeholders to create the first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages, and details the characteristics of these languages to help researchers and practitioners better understand the challenges they pose for NER tasks.
Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream Tasks
A parallel version of the Word2Vec model for natural language processing (NLP) tasks is presented, which automates the very labor-intensive, and therefore time-consuming and expensive, initialization of deep neural networks.
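The hyper-parameters at stake can be shown with a short gensim sketch; this is a minimal example assuming gensim 4.x (where the dimension argument is `vector_size`) and a toy corpus, not the paper's training data or its reported optimal settings.

```python
# A minimal gensim sketch of key Word2Vec hyper-parameters (architecture,
# training algorithm, window and dimension sizes); assumes gensim 4.x and
# a toy corpus, not the paper's data or its optimal values.
from gensim.models import Word2Vec

sentences = [
    ["natural", "language", "processing"],
    ["word", "embeddings", "capture", "meaning"],
]  # placeholder corpus

model = Word2Vec(
    sentences=sentences,
    vector_size=100,  # embedding dimension
    window=5,         # context window size
    sg=1,             # 1 = skip-gram, 0 = CBOW
    hs=0,             # 0 = negative sampling, 1 = hierarchical softmax
    negative=5,       # number of negative samples
    min_count=1,      # keep every token in this tiny corpus
    workers=4,        # parallel training threads
)
print(model.wv.most_similar("language", topn=2))
```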
Conversational Systems in Machine Learning from the Point of View of the Philosophy of Science—Using Alime Chat and Related Studies
This essay examines research practices through, among others, Longino's view of objectivity and Popper's stance on falsification, and concludes that open data and open scientific discussion fora should become more prominent than the current, merely publication-focused trend.
Inner loop program construct: A faster way for program execution
- Tosin P. Adewumi
- Computer Science, Open Comput. Sci.
- 1 July 2018
This research sought to find out whether there is any speed difference between a single loop of computations and a loop with an inner loop of the same computations, and established that, across all languages tested, more computations were performed per unit time with an inner for-loop than without one.
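A minimal Python sketch of this kind of comparison follows; the paper benchmarked several languages, and the workload and iteration counts here are illustrative assumptions only.

```python
# A minimal timing sketch comparing a single loop with a nested-loop variant
# performing the same total number of computations; the workload and counts
# are illustrative assumptions, not the paper's benchmarks.
import time

TOTAL = 10_000_000
INNER = 1_000

def single_loop() -> int:
    total = 0
    for _ in range(TOTAL):
        total += 1
    return total

def with_inner_loop() -> int:
    total = 0
    for _ in range(TOTAL // INNER):
        for _ in range(INNER):  # inner for-loop over the same work
            total += 1
    return total

for fn in (single_loop, with_inner_loop):
    start = time.perf_counter()
    fn()
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
```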
Understanding the Role of Objectivity in Machine Learning and Research Evaluation
The case for more objectivity in Machine Learning (ML) research is made, some of the current challenges are discussed, the role of objectivity in the two elements under consideration in ML is examined, and recommendations to support the research community are offered.
Inner For-Loop for Speeding Up Blockchain Mining
Comparison shows that an inner for-loop for the population-based approach is slightly faster than brute force, with an average speed advantage of about 1.67%, or 3,420 iterations per second, performing better 73% of the time.
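The batching idea can be sketched as a simplified proof-of-work nonce search; this illustrates only the inner for-loop over candidate nonces, not the paper's population-based method, and the difficulty and batch size are made-up assumptions.

```python
# A simplified proof-of-work nonce search using an inner for-loop to test a
# batch of candidates per outer iteration; an illustration only, not the
# paper's population-based method (difficulty and batch size are made up).
import hashlib

def mine(header: bytes, difficulty: int = 4, batch: int = 1_000,
         max_nonce: int = 10_000_000):
    target = "0" * difficulty
    nonce = 0
    while nonce < max_nonce:
        for n in range(nonce, nonce + batch):  # inner for-loop over a batch
            digest = hashlib.sha256(header + str(n).encode()).hexdigest()
            if digest.startswith(target):
                return n, digest
        nonce += batch
    return None, None

nonce, digest = mine(b"block header")
print(nonce, digest)
```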
Corpora Compared: The Case of the Swedish Gigaword & Wikipedia Corpora
It is shown that the difference in performance of embeddings from differently sourced data for a given language can be due to factors other than data size, such as the breadth of the covered domain and noise.