Towards Literate Artificial Intelligence

Mrinmaya Sachan
Standardized tests are used to assess students as they progress through the formal education system. These tests are readily available and have clear evaluation procedures. Hence, it has been proposed that they can serve as good benchmarks for AI. In this thesis, we propose approaches for solving some common standardized tests taken by students, such as reading comprehensions, elementary science exams, geometry questions on the SAT, and mechanics questions on the AP Physics exam. Answering…


My Computer Is an Honor Student - but How Intelligent Is It? Standardized Tests as a Measure of AI
It is argued that machine performance on standardized tests should be a key component of any new measure of AI, because attaining a high level of performance requires solving significant AI problems involving language understanding and world modeling, critical skills for any machine that lays claim to intelligence.
Elementary School Science and Math Tests as a Driver for AI: Take the Aristo Challenge!
This work takes on a specific version of this challenge, namely having a computer pass elementary school science and math exams, the most difficult of which require significant progress in AI.
Combining Retrieval, Statistics, and Inference to Answer Elementary Science Questions
This paper evaluates the methods on six years of unseen, unedited exam questions from the NY Regents Science Exam, and shows that the overall system's score is 71.3%, an improvement of 23.8% (absolute) over the MLN-based method described in previous work.
Reading comprehension tests for computer-based understanding evaluation
A methodology for evaluating the application of modern natural language technologies to the task of responding to reading comprehension (RC) tests is presented, based on ABCs (Abduction Based Comprehension system), an automated system for taking tests that require short answer phrases as responses.
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
This work argues for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering, and classify these tasks into skill sets so that researchers can identify (and then rectify) the failings of their systems.
Automatic factual question generation from text
This research supports the idea that natural language processing can help teachers efficiently create instructional content by automating the creation of a specific type of assessment item, and it provides solutions to some of the major challenges in question generation.
Easy Questions First? A Case Study on Curriculum Learning for Question Answering
This work compares a number of curriculum learning proposals in the context of four non-convex models for QA and shows that they lead to real improvements in each of them.
Machine Comprehension using Rich Semantic Representations
A unified max-margin framework is presented that learns to find a latent mapping of the question-answer meaning representation graph onto the text meaning representation graph that explains the answer, and uses what it learns to answer questions on novel texts.
Learning Answer-Entailing Structures for Machine Comprehension
A unified max-margin framework is presented that learns to find hidden structures that explain the relation between the question, correct answer, and text, and is extended to incorporate multi-task learning on the different subtasks that are required to perform machine comprehension.
Learning to Ask: Neural Question Generation for Reading Comprehension
An attention-based sequence learning model for the task is proposed, the effect of encoding sentence- vs. paragraph-level information is investigated, and results show that the system significantly outperforms the state-of-the-art rule-based system.