Corpus ID: 7652387

What is my essay really saying? Using extractive summarization to motivate reflection and redrafting

@inproceedings{Labeke2013WhatIM,
  title={What is my essay really saying? Using extractive summarization to motivate reflection and redrafting},
  author={Nicolas van Labeke and Denise Whitelock and Debora Field and Stephen G. Pulman and John T. E. Richardson},
  booktitle={AIED Workshops},
  year={2013}
}
This paper reports on progress on the design of OpenEssayist, a web application that aims at supporting students in writing essays. The system uses techniques from Natural Language Processing to automatically extract summaries from free-text essays, such as key words and key sentences, and carries out essay structure recognition. The current design approach described in this paper has led to a more "explore and discover" environment, where several external representations of these summarization… 
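
The abstract only hints at how the key words and key sentences are obtained; the citing papers listed below by the same group describe graph-based ranking with centrality algorithms. Purely as an illustration of that family of techniques, and not as the OpenEssayist implementation, the Python sketch below scores sentences by weighted-degree centrality over a word-overlap similarity graph and picks key words by frequency; the sentence splitter, stop-word list, and function names are all assumptions.

```python
import re
from collections import Counter

# Illustrative stop-word list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "that", "it", "this", "as"}

def split_sentences(text):
    # Naive splitter on sentence-final punctuation; a real system would use a tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def content_words(sentence):
    return {w for w in re.findall(r"[a-z]+", sentence.lower()) if w not in STOPWORDS}

def key_sentences(text, k=3):
    sents = split_sentences(text)
    words = [content_words(s) for s in sents]
    scores = [0.0] * len(sents)
    # Weighted-degree centrality on a sentence-similarity graph:
    # edge weight = word overlap, normalised by the combined sentence lengths.
    for i in range(len(sents)):
        for j in range(i + 1, len(sents)):
            overlap = len(words[i] & words[j])
            if overlap:
                w = overlap / (1 + len(words[i]) + len(words[j]))
                scores[i] += w
                scores[j] += w
    top = sorted(range(len(sents)), key=lambda i: scores[i], reverse=True)[:k]
    return [sents[i] for i in sorted(top)]  # present key sentences in essay order

def key_words(text, k=10):
    # Simple frequency-based key word extraction over content words.
    counts = Counter(w for s in split_sentences(text) for w in content_words(s))
    return [w for w, _ in counts.most_common(k)]
```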

Citations

Did I really mean that? Applying automatic summarisation techniques to formative feedback
TLDR
Reports on the application and adaptation of graph-based key word and key sentence ranking methods to a novel purpose, and on the suitability of two different centrality algorithms for key word extraction.
An exploration of the features of graded student essays using domain-independent natural language processing techniques
TLDR
Presents observations on a corpus of 135 graded student essays, analysed with a computer program designed to provide automated formative feedback on draft essays; the findings suggest that some characteristics of students’ essays may be related to the grades that tutors assign to them.
Applying automatic summarisation techniques to formative feedback
TLDR
Reports on the application of key word extraction and key sentence ranking methods to a novel purpose, and on observations concerning the suitability of two different centrality algorithms for this task.
Automatic Summarization for Student Reflective Responses
TLDR
A new phrase-based highlighting scheme for automatic summarization is introduced that highlights the phrases in the human summaries and also the corresponding semantically-equivalent phrases in student responses.
Using student experience as a model for designing an automatic feedback system for short essays
TLDR
The SAFeSEA project (Supportive Automated Feedback for Short Essay Answers) aims to develop an automated feedback system to support university students as they write summative essays; its studies found that students consider essay writing to be a sequential set of activities and a skill that requires the development of personal strategies.
Using student experience to inform the design of an automated feedback system for essay answers.
TLDR
Empirical studies carried out in the initial phase of the SAFeSEA project suggested that students consider essay writing as: 1) a sequential set of activities, 2) a process that is enhanced through particular sources of support and 3) a skill that requires the development of personal strategies.
Functional, Frustrating and Full of Potential: Learners' Experiences of a Prototype for Automated Essay Feedback
TLDR
This study has important implications for the next phase of development, when the role of OpenEssayist in supporting students’ learning will need to be more clearly understood.
Leveraging BERT for Extractive Text Summarization on Lectures
TLDR
This paper reports on the Lecture Summarization Service, a Python-based RESTful service that uses the BERT model for text embeddings and KMeans clustering to identify the sentences closest to the centroids for summary selection (a minimal sketch of this approach follows this list).
The challenging task of summary evaluation: an overview
TLDR
A clear up-to-date overview of the evolution and progress of summarization evaluation is provided, giving the reader useful insights into the past, present and latest trends in the automatic evaluation of summaries.
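
One of the citing papers above, "Leveraging BERT for Extractive Text Summarization on Lectures", describes embedding sentences with BERT, clustering the embeddings with KMeans, and selecting the sentence closest to each centroid. The sketch below is an assumed reconstruction of that idea, not the original service: it relies on the sentence-transformers wrapper and scikit-learn, and the model name is an arbitrary choice.

```python
import re
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

def summarise(text, n_sentences=5, model_name="all-MiniLM-L6-v2"):
    # Split into sentences (naive splitter) and embed each one.
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    n_sentences = min(n_sentences, len(sents))
    embeddings = SentenceTransformer(model_name).encode(sents)
    # Cluster the sentence embeddings and keep the sentence closest to each centroid.
    kmeans = KMeans(n_clusters=n_sentences, n_init=10, random_state=0).fit(embeddings)
    closest, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, embeddings)
    return [sents[i] for i in sorted(set(int(i) for i in closest))]
```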

References

Showing 1-10 of 27 references
Reflections on characteristics of university students’ essays through experimentation with domain-independent natural language processing techniques.
TLDR
Observations on a corpus of 135 graded student essays, analysed with a computer program that is designed to provide automated formative feedback on draft essays, suggest that some characteristics of students’ essays may be related to the grades that tutors assign to them.
Glosser: Enhanced Feedback for Student Writing Tasks
We describe Glosser, a system that supports students in writing essays by 1) scaffolding their reflection with trigger questions, and 2) using text mining techniques to provide content clues that can…
Analysing Semantic Flow in Academic Writing
TLDR
A novel visualisation method for providing feedback to support formative essay assessment makes use of text mining techniques to provide insight into the semantics of the topics in an essay.
Using student experience to inform the design of an automated feedback system for essay answers.
TLDR
Empirical studies carried out in the initial phase of the SAFeSEA project suggested that students consider essay writing as: 1) a sequential set of activities, 2) a process that is enhanced through particular sources of support and 3) a skill that requires the development of personal strategies.
Helping Students Understand Courses through Written Syntheses: An LSA-Based Online Advisor
TLDR
Introduces Pensum, a service integrated in a personal learning environment (PLE) that helps students understand course content through writing a synthesis, and helps teachers and tutors manage this activity.
Summary Street: Interactive Computer Support for Writing
TLDR
In classroom trials 6th-grade students not only wrote better summaries when receiving content-based feedback from Summary Street, but also spent more than twice as long engaged in the writing task.
Analysis of collaborative writing processes using revision maps and probabilistic topic models
TLDR
This paper proposes three visualisation approaches and their underlying techniques for analysing writing processes used in a document written by a group of authors, and illustrates how these visualisations are used with real documents written by groups of graduate students.
Text summarisation in progress: a literature review
TLDR
This paper presents an extensive literature review of the research field of Text Summarisation (TS) based on Human Language Technologies, explaining existing methodologies and systems as well as new research concerning the automatic evaluation of summary quality.
Natural Language Processing with Python
This book offers a highly accessible introduction to natural language processing, the field that supports a variety of language technologies, from predictive text and email filtering to automatic…
The nature of feedback: how different types of peer feedback affect writing performance
TLDR
Five main predictions were developed from the feedback literature on writing, regarding feedback features as they relate to potential causal mediators of problem or solution understanding and problem or solution agreement, leading to the final outcome of feedback implementation.