Towards Literate Artificial Intelligence

@phdthesis{Sachan2020TowardsLA,
  title={Towards Literate Artificial Intelligence},
  author={Mrinmaya Sachan},
  year={2020}
}
Standardized tests are used to assess students as they progress through the formal education system. These tests are readily available and have clear evaluation procedures. Hence, it has been proposed that they can serve as good benchmarks for AI. In this thesis, we propose approaches for solving some common standardized tests taken by students, such as reading comprehension tests, elementary science exams, geometry questions on the SAT, and mechanics questions on the AP Physics exam. Answering…
